
IETF 74 Technical Plenary
26 March 2009

Introduction (Olaf Kolkman)

(Olaf) We are at the technical plenary.  For all the remote participants, we have a jabber room.  Presentations have been uploaded to the server.  When at the mic, please state your name clearly.  As for the agenda: this is the welcome, then there will be the IRTF chair report, followed by my report.  We will then have a session about MPLS, and how that protocol just turned twelve.  We will be looking at it from a 'protocol success' perspective: "what can we learn from its history?"  After that we will have an open mic session on the technical topic, followed by a general open mic for the IAB.  And with that, I invite Aaron to the stage.


IRTF Chair Report  (Aaron Falk)

(Aaron) I am Aaron Falk, chair of the IRTF.  A little status on the IRTF...  There were four RGs that met this week. DTN, HIP, and the P2P RGs have already met.  The RRG meets tomorrow for most of the day.  Representatives of the HIP RG also met this morning with the IAB, where we talked about the maturity and open issues of the HIP specifications. That was a very productive meeting.

For those of you interested in HIP, the most pressing need is to get experimental deployments in order to get some information on how well it works in larger networks.  There is more info on the HIP RG page.

Along with the rest of the IETF, we (the IRTF) have been looking at the issues with copyrights.  Our goal for the IRTF RFCs is to minimize the difficulty in having work move between the IRTF and IETF.  As a result, much of the boilerplate will be the same, and that should be helpful.

The IRTF has not published any RFCs since Minneapolis.  Three documents are waiting, but they are currently being held up by the publication of the Streams, Headers & Boilerplates, and 3932bis RFCs.

A few new RGs are under discussion.  Paul Hoffman is drafting a charter on PKIng, and Martin Stiemerling is working on an RG on virtualization.

Here is a quick snapshot on how the different RGs are doing.  We have about 8 RGs that are doing well, and three that are quiet.  The TM RG should perk up shortly; Sally just retired as chair and we are transitioning to new chairs.  The SAM RG is planning to meet in Hiroshima.

Similar to the Minneapolis meeting, I will talk a little bit about the activity in a few RGs, so people can get an idea of the technical work (and maybe attend next time, if interested).

The Crypto Forum RG... the CF RG is really a little different from other RGs.  It is more of a discussion forum, a source of advice.  It is kind of an advisory body with researchers who otherwise do not come to IETF meetings.  They give advice on the use of mechanisms, e.g. MD5, HMAC, and try to convert new advances in theory into practice in the IETF.  They invite any IETF WGs with issues on these topics to bring them to the RG.  A few topics they have been looking at are MD5 and SHA-1, which are showing signs that they may have weaker security properties than previously thought.  Are they still serving the needs of IETF protocols?  They also do doc reviews, for example on key exchange, threshold crypto, etc.  Multi-party keying - where you have a private key with multiple holders that work together to sign or decrypt - may have applications in DNSSEC.

As for the RRG, there are a lot of folks familiar with that work.  To review the problem, there is concern about the uncontrolled growth of the routing table in the internet, and about the ability of routers to support this growth.  A major contributor is multihoming, where sites inject routes into multiple networks in a way that cannot be aggregated.  As more networks multi-home, the table is growing faster.  The RRG is looking at new architectures to control this overhead and reduce the dependency on the number of sites, but we don't want to lose the benefits of multihoming, or cause new problems (such as requiring site renumbering).  This makes deployment an important consideration.  We don't want to make the routing system less secure, or degrade convergence or stretch properties.  The group is converging on common terminology, has agreed upon a formulation of the problem, and is starting to move past that point.  One example is a proposal for a progressive approach to deploying a new architecture.  There is work going on outside IETF meetings.  For example, there was a workshop on naming and addressing.  The RRG goal is to provide a set of recommendations by this time next year.  That should be guidance for where a new architecture might go, and lead to selection of new routing protocols.

That is all from me.  Any questions?  


IAB Chair Report (Olaf Kolkman)

(Olaf) I have not yet uploaded the IAB report to the server; it will be done as soon as the plenary is over.  This is my overview slide of the places where you can find information about the IAB. The IAB charter, overview...  Our homepage has an RSS feed where new articles are posted.  If you have an interest in what the IAB is doing, it is easy to get that feed.  Additional information is available in documents and minutes.  We have tried hard to catch up on meeting minutes, and have reduced the backlog from 7 months to 2 months.  Our goal is to have minutes posted with a 1-2 meeting backlog, but we are not quite there yet.
The link here is to the text-formatted minutes - these are published before the HTML-formatted minutes.  And there is a link to our correspondence.

We have had some document activity.  We recently submitted "Principles of Host Configuration", and it is in AUTH48.  Two weeks ago we submitted the "Design Choices" draft.  It has been in the IAB for a long time, and we finally agreed on content and balance of considerations.

More document activity... There are two outstanding issues with "Streams, Headers and Boilerplates", and there is the "RFC Editor Model" document.  That is still under review, and there is ongoing discussion of the RFC Editor model on the RFC-interest list.  Those are the docs that we are working on.  We are also trying to write up thoughts on NAT for IPv6; that doc has just been published as draft version 00, and we would appreciate review and feedback.  Then there is the P2P doc.  It is also in version 00, and we would appreciate review of it.  Another doc that has been dormant and that we are working to spin up again is the IETF protocol parameters document.  Also on the list is the "Evolution of the IP Model".  That document was presented in the previous plenary.

On inter-organizational work, we continue to follow MPLS-TP.  We want to get those documents out of the IETF as soon as possible, but are concerned about the confusion with T-MPLS.  Also on the inter-organization front are liaison relations.  The last time I reported that we would be involved in the technical committee of the OECD, and that has now been established.  And we are busy with the selection of the ISOC BoT on behalf of the IETF.

We are involved in the review of the RFC Editor model.  The RFI was sent out, and a review of that RFI has caused some discussion about the clarity of the RFC Series Editor function.

Aaron was re-appointed as IRTF Chair.  The IAB upheld the decision on the appeal from JFC Morfin.  There are four new IAB members, and that also means there are four outgoing members.  I have been re-appointed as IAB chair, Dave Oran has been appointed as IESG liaison, Dow Street was re-appointed as IAB Exec Director.



(Olaf) A Dutch song of departure...




(Olaf) These fine people did an outstanding job.  This is a token plaque, don't zoom in too much.  The real plaques will follow in the mail (so as not to break in luggage).  Lixia, thank you.  Barry, thank you.  Kurtis and Loa, thank you.  This is a token of gratitude from ISOC.  I now invite the speakers to the stage.

(Joe Touch) On the RFC Editor discussion, can you verify that the list address on the slides was correct?

(Olaf) That discussion is on the rfc-interest list.  That is an error on the slides.  I will correct that before uploading them.


Technical Plenary

(Loa Andersson) The last time I stood here I said that that was an unnecessary report.  This time, I will talk about something we have been working on for a long time.

The first time I heard a talk about MPLS (without a name) was in 1996.  So it has been a number of years now, and in my slides I call it "MPLS a teenager".  We are going to talk about MPLS from the perspective of RFC 5218, how a protocol becomes successful.  MPLS is one of our wildly successful protocol suites.  To get lessons learned about MPLS development, we invited three engaged speakers who were there when it started, and when it was implemented and deployed.  This is not going to be a tutorial, but will use MPLS as an example of how to build a protocol.  We want to give you something to take away, to use, so as to increase the ability to talk about successful protocols in general.

George Swallow will give a brief history.  Tom Bechly will give an operator view, and Kireeti will give a vendor perspective.  First we will have presentations, and then questions to the panel, and then open mic.


1.  "MPLS history - MPLS becoming a teenager"
   George Swallow, Cisco (see slides plenaryt-3)

(George) I am here to talk about what led up to MPLS, and what made it successful.  First, some history...  Back in the late 80s there was ATDM, then ATM, and the idea of "switching on labels" came along.  It got a lot of funding, and was going to become the technology that covers the earth.  We tried to figure out how to put IP over ATM, how to join the two technologies.  It was a hard nut to crack.

Somewhere in there Tom Lyon (Sun) had the thought that it should be a pure model where IP controls the circuits.  Then the first cell switched router was built in 1995 (Japan?).  It was a research thing.  By then the internet was flowing, and we said that "IP had won".  In 1996, Ipsilon announced a flow-based switch, an IP controlled cell switch.  Yakov said "great idea", but it should be frames not cells, and at the network layer.  So we had tag switching.

In the fall we had a BOF at MIT where we talked about this technology.  In Jan of 1997 the WG was formed, and the group took off.  As for the architecture, the first cool thing we did was to pry apart the control plane and the data plane.  What we did here was have many control planes that could control one simple forwarding plane.  We were not going to imbue these labels with a lot of semantics a priori.  We set a few special ones, but otherwise there are no semantics attached - that depends on the specific control plane:  QOS or not, unicast or mcast, etc.

The next big thing, which there was an argument about, was to decide on the number of labels.  ATM had two labels, and that was not enough.  Eric Rosen said "just make it a stack".  You can push labels on to mean different things, and they can come from different control planes.  Packets then gain or lose labels as they go through the network.
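
As an aside for readers (this was not covered in the talk itself): the label stack entry later standardized in RFC 3032 packs a 20-bit label, a 3-bit EXP field, a bottom-of-stack flag, and a TTL into each 32-bit word, and entries are simply prepended to the packet.  A minimal illustrative sketch, with all label values invented:

    # Illustrative only: the 32-bit MPLS label stack entry (per RFC 3032) --
    # Label (20 bits), EXP (3 bits), S bottom-of-stack flag (1 bit), TTL (8 bits).
    # Entries are prepended to the packet, so pushing and popping is cheap.
    import struct

    def encode_entry(label, exp, bottom, ttl):
        word = (label << 12) | (exp << 9) | (int(bottom) << 8) | ttl
        return struct.pack("!I", word)              # network byte order

    def push(stack, label, exp=0, ttl=64):
        # A pushed entry is the bottom of the stack only if the stack was empty.
        return encode_entry(label, exp, len(stack) == 0, ttl) + stack

    def decode_top(stack):
        (word,) = struct.unpack("!I", stack[:4])
        return {"label": word >> 12, "exp": (word >> 9) & 0x7,
                "bottom": bool((word >> 8) & 0x1), "ttl": word & 0xFF}

    # Two labels from two different control planes (values invented):
    stack = push(b"", 100042)       # e.g. a service label, bottom of stack
    stack = push(stack, 16001)      # e.g. a transport label on top
    print(decode_top(stack))        # {'label': 16001, ..., 'bottom': False}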

We were not going to make this independent of other protocols; instead it would run on the same wire as IP.  For unicast forwarding, all we will do is give out labels that bind to an NLRI.  How they get bound to the forwarding plane is under the control of the router.
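
To make that binding concrete (again, an aside rather than part of the talk): a minimal, hypothetical sketch in which the ingress maps a prefix learned via the NLRI to a label advertised by its downstream neighbor and pushes it, while core routers forward on the incoming label alone, swapping or popping as their own tables dictate.  All prefixes, labels, and router names below are invented:

    # Hypothetical label forwarding sketch; every value below is invented.
    # Ingress binding: prefix (FEC, from the NLRI) -> (outgoing label, next hop)
    INGRESS_BINDINGS = {
        "10.1.0.0/16": (17, "routerB"),
        "10.2.0.0/16": (18, "routerB"),
    }

    # Core label forwarding table: incoming label -> (action, out label, next hop)
    CORE_LFIB = {
        17: ("swap", 21, "routerC"),
        21: ("pop", None, "routerD"),   # penultimate hop pops; egress sees plain IP
    }

    def ingress_forward(prefix, packet):
        label, next_hop = INGRESS_BINDINGS[prefix]       # longest-match elided
        return [label], packet, next_hop                 # push one label

    def label_forward(labels, packet, lfib):
        action, out_label, next_hop = lfib[labels[0]]
        new_labels = [out_label] + labels[1:] if action == "swap" else labels[1:]
        return new_labels, packet, next_hop

    labels, pkt, nh = ingress_forward("10.1.0.0/16", "ip-packet")   # [17]
    labels, pkt, nh = label_forward(labels, pkt, CORE_LFIB)         # swap -> [21]
    labels, pkt, nh = label_forward(labels, pkt, CORE_LFIB)         # pop  -> []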

What has happened to make MPLS successful?  First, technology and layer convergence.  TE came along and supplanted the need for ATM and frame relay in the core, those being used to traffic engineer the network.  We wanted to be able to fill pipes as best as we could.  So, traffic engineering came along, then fast re-route came along and you didn't need redundancy at SONET anymore.  Now it is mostly running over ethernet, and in the future will be running over optical.

In a funny twist of success, MPLS-TP was targeted as a replacement technology for SONET.  That was unexpected.

The last thing I wanted to talk about was service convergence on a single IP / MPLS core.  The major application that is making money for ISPs on MPLS is virtualization: L2 and L3 VPNs, with ISPs selling that as a service.

The other way that services have been proposed: MPLS was primarily a core technology, but lately people have been pushing this out to the access links, e.g. pseudowires.

In conclusion, some keys to success were to realize that IP had won.  People didn't want a new technology; they needed some other services in those boxes.  The drivers were IP services.  If you look inside L2 and ethernet, it is IP in there, and the services are all based on IP protocols.

From an architecture point of view, the keys to success are the simple forwarding paradigm, flexible semantics, stacking, and independence of the control plane.


2.  "Operator perspective - the why's , how's, and obstacles"
   Tom Bechly, Verizon Business/MCI (see slides plenaryt-4)

(Tom) I am with Verizon, and previously worked for MCI and BT.  I have worked on a lot of these networks.  This talk represents the work of many people.  Taking the operator perspective, we use MPLS as the core infrastructure, and have been successful using it in this manner.

At a high level, there are lots of extensions for MPLS, lots of work.  The idea was to move stuff faster, and in a deterministic fashion.  From a commercial perspective, the VPN space is quite large.  VPN service has exploited MPLS over the years.  Service providers use a lot of MPLS networks.  The benefits are that it gives you more control over latency and delay characterization.

Also, when adding equipment, sometimes the growth is unmanageable.  Traffic engineering helps to bridge the gap.  It lets you use equipment that is underutilized to protect equipment that is overutilized.  MPLS lets you measure a lot of statistics on lots of paths:  latency, symmetry, etc.  You can use those statistics to plan your network.  You can have segregation for VPN type services.

UUNET was a frame-relay overlay, and when L2 became inadequate it moved to ATM.  The cost became high, and as a result MPLS was fairly quickly deployed.  Clearly it was used to control path selection in a deterministic manner.

The infrastructure is MPLS.  We have fairly notable networks that use MPLS.  The first was BBNS.  It moved to MPLS in 1999, and other networks also moved.  We also used MPLS as the underpinning for L2 services.  One of the more interesting is MAE-West, as that model shifted from metro exchange to co-location.

.... more network examples...

In the community we have a lot of competing protocols, but here we have LDP and BGP for signaling.  Sometimes this causes a problem for operators.  An operator will choose one, or use both.  The cost is lower if this is decided earlier in the process.  After an acquisition you can end up with gateways.  This increases the cost of developing and managing protocols.  VPLS...  This kind of stuff needs to be considered earlier on.

Then there is more problematic stuff, like pseudowire diagnostics.  In this case there are three types, and the type is negotiated at the time of pseudowire setup.  Some vendors have implemented some, others have implemented others. All can say that they are compliant with the specification, but they do not interoperate.  This delays deployment, even though it gets sorted out over time.

Sometimes we try to do things that are non-obvious.  For our latency sensitive customers, who are pushing out financial market data, lower latency gives a significant advantage.  To put this on an MPLS network, we use fast re-route around fiber cuts, and then run re-optimization every 20 min or so.  The problem is that in this business there can be several ms difference in latency.  The out of order data that results could be transactions.  So there are trade-offs as to what the technology can do.  These applications are "thin" in how well they can handle reordering, so the customers want to be notified before doing re-optimization.  These are just some examples of stuff we worked with.

In summary, I would not say that MPLS is the answer to every problem, but can be used effectively in many networks.


3.  "Vendor perspective - The development of the Technology"
   Kireeti Kompella, Juniper (see slides plenaryt-5)

(Kireeti) A lot of people think of MPLS as label switching, and focus on the data plane, but the control plane is just as important.  That is, the control plane being IP friendly, using IP protocols.  From the suite of protocols we look at, there is also CCAMP, L1/L2 VPNs, PSE, and others.

The next two slides are supposed to give you pause for thought.  The initial goal was to enable faster forwarding, and also to add explicit routing to IP, allow integration of ATM and IP, and eventually have a BGP-free core.

But looking at drivers for deployment, the first deployment was for VPNs and the second was traffic engineering.  The third was convergence of different networks: IP, frame, ATM.  Then you saw VPLS.  Then as George pointed out, pushing out to metro:  BGP-free core, BGP at the edges, MPLS forwarding in the middle.  The interesting thing here is to contrast the initial reasons for developing MPLS with the reasons why it is used now.

The lesson to take away is "if you think you have a crystal ball, think again."  The reasons for designing and the reasons for deploying are quite different.  You can see that there is a point around 2001 where we see MPLS used for new applications, for ISPs to make money.  After that it is used to save money (through more efficient use of capacity).

The MPLS architecture is a study in pragmatism.  RFC 3031 talks about how to do forwarding, label hierarchy.  The authors had a fairly concrete idea of how things should work.  Contrast this with other SDOs.

PHP is not fundamental to MPLS; it is more of a clever hack.  The biggest reason for having a payload identifier is for looking inside the payload to see if it is an IP packet, and see if I can do load balancing better.  At the end of the day, we have this whole new protocol suite, but only two new protocols.  Everything else was an extension of existing stuff.
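
For the load-balancing point (an aside, not from the talk): since the label stack carries no payload identifier, a common heuristic is to peek at the first nibble below the bottom label, treat 4 or 6 as an IP version field, and hash the inner addresses for ECMP, falling back to the labels otherwise.  A simplified sketch; real implementations add more safeguards:

    # Simplified sketch of a common ECMP heuristic for MPLS payloads.  Offsets
    # assume a plain IPv4/IPv6 header sits directly under the bottom label.
    import zlib

    def ecmp_link(label_stack, payload, num_links):
        first_nibble = payload[0] >> 4 if payload else 0
        if first_nibble == 4 and len(payload) >= 20:      # looks like IPv4
            key = payload[12:20]                          # source + destination
        elif first_nibble == 6 and len(payload) >= 40:    # looks like IPv6
            key = payload[8:40]                           # source + destination
        else:                                             # unknown: labels only
            key = b"".join(l.to_bytes(3, "big") for l in label_stack)
        return zlib.crc32(key) % num_links

    # Example: an IPv4 payload under two labels, spread over 4 parallel links.
    ipv4_header = bytes([0x45]) + bytes(11) + bytes([10, 0, 0, 1, 10, 0, 0, 2])
    print(ecmp_link([16001, 100042], ipv4_header, 4))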

People today say "you are making this protocol do something it was not designed to do."  But "supposed to do" means "does the intended operation fit with the protocol semantics?"  MPLS protocols have a lot of flexibility and extensibility.  If you look at why we did MPLS, and deployed MPLS, the fact that it was flexible was very important. An example of this is the label stack.  That helped us to do fast re-route, VPNs.  We once thought 2 was enough, but now we are happy to have many.  We once thought a label was a short forwarding header, but now we use it for other things since they don't have exact, fixed semantics.

There are lots of variants in some of the functions.  For example, there are lots of options for doing multicast for L3 VPNs.  Some were published, but then deprecated.  Virtual routers never got deployed in a big way.  BGP MPLS VPNs were specified and deployed.  Some stuff never made it to an RFC.



Having competing specs did do a lot for the winning spec.  CRLDP made RSVP into a soft state protocol.  BGP for L2 VPNs pushed LDP to find a discovery protocol.



From an implementation view, implementation was concurrent with spec development.  Deployments were based on drafts.  That made vendors talk to each other closely.  There are lots of knobs to tweak.  A lot of that did not get pushed back into the spec.  For example, you had to implement both ways of fast re-route.  Having two standards, or specs, means a lot of work all the way around.  It also gives us a good perspective on "what is the right way of doing things."  It can have one spec push features into another that would not be there on its own.

We didn't do enough to protect MPLS and the standards that went with it.  A lot of people tried to borrow, tried to "help" with the technology.  I think we can do more in this area.  For example, allocating big segments of code points by IETF action instead of first-come, first-served.  You need to plan for success.

In terms of development, being pragmatic, and re-using protocols - if it fits, use it.  If not, create a new one. Implement as you specify, so when it becomes an RFC you know it works.  A single standard some day is good, but sometimes having two along the way is a possibility.  And make it extensible.


Question and Answer Session:

(Andy Malis) Given the experience of the last 12 years, what would you do differently in the MPLS design?

(George) I really don't know the answer to that question.  There were a lot of times where a PID seemed like a good thing, but then in other places it would have gotten in the way.  But, at the time, I wanted to put a PID in.  The idea was to have one extra word in the stack - a Protocol ID.  This would have been a full DS byte, rather than 3 bits.

There was early-on talk about putting labels into IGPs instead of LDP, and there are times where certain interactions would have been smoother if we had done that.  And had globally owned label spaces... I have thought about all of those, but not fully thought through all the implications of those changes.  Overall I think it came out pretty good, so I have no regrets.

(Tom) From an operator's perspective, the diagnostic tools sometimes significantly trail the control plane and data plane.  That makes things harder.  Having them sooner would help.

(Kireeti) What Tom wanted to add was MIBs.  One thing I would change - in LDP we put in a lot of machinery that we don't use today.  Label retention is an example.  There are lots of modes in LDP that complicate the implementation greatly.  At the time it was important to get consensus, but the code is now there and it is hard to extract.  But, we built in extensibility, not too much mechanism in the code.

(Eric Burger) On the issue of cross-pollination... There was one statement that having two protocols delayed deployment for 18 months.  There was another that it was great that we had two protocols - let them fight it out. We see this a lot in the IETF, and sometimes end up with worst of both worlds.  What do you think?

(Kireeti) Having two protocols is painful all around, for specifiers, implementors, and operators.  It is not a choice we make lightly.  At the same time, to take the King Solomon approach and cut things off too early is not good.  Sometimes you have perfect agreement.  I walked into an SSH WG and there was great agreement.  That is not what happened in MPLS.  There were often long lines at the mics in MPLS.  You want to aim for one protocol, but competition is good.

(George) It would be nice if we could figure out how to crack this nut, but it seems like a disease of this industry, how it is organized.  We are all partially guilty.  Just having it aired here is probably good in helping to understand the costs.

(Tom) I was not lobbying that there should *never* be competing protocols, but the decision needs to be in context.

(Kireeti) Someone once said that "the wonderful thing about having standards is that you have so many to choose from." :-)

(Paul Hoffman) Kireeti used the word interoperability.  There seemed to be a race to bug compatibility.  There isn't a lot of MPLS interoperability, between vendors, or ISPs.  You can have two service providers using the same vendor, but you can't get the policies right, and so they run away.  Too many wonderful choices, but without interoperability, it is not a success.

(Kireeti) Tom touched on this.  What happened was that deployment happened in 2000, and there we were playing with early implementations.  They were helpful in finding out what worked and didn't work.  Over time, in the 2001 timeframe, you had independent labs doing third party interoperability testing.  The specs were poor.

(George) There were times that people were fixing bugs.  As far as vendor interoperability, there was a lot of that. Carrier to carrier is also fine, but interoperability between service providers is a can of worms.  When the semantics of the label are somewhat loose, it is hard.  If we lived in a less threatened world, with less need for security, we would have more cross vendor interoperability.

(Tom) We drive a lot of interoperability testing privately.  I don't know that we participated in public interoperability testing.  Future function interoperability is also a big issue.

(George) There is nothing better than a vendor helping us here.

(Tom) On the question of "how much interoperability in your networks?"  We have several networks with multiple vendors who interoperate, on the voice side, but also on the data side.  So there is a fair amount of that. Whenever we can we try to use multiple vendors.  We have only a few that are single vendor.

(Kireeti) We work a lot to ensure interoperability, but in the early stages of GMPLS, there was discussion about having a control plane for optical stuff.

(Stewart Bryant) IPv4 is thinking about retiring.  MPLS is a teenager now.  Thinking forward, what do you think would cause MPLS to retire?

(George) I expect to retire before MPLS...

(Jim Carlo) As a former chair of IEEE 802, we had your second issue.  A single standard appears to be the right approach, but often having competing standards is better and gets you there in a more timely manner, even if it causes more work.  We think one is better, but actually having two is better.

(Monique Morrow) What could you have done better?  OAM is a big topic, what could have been done better there, or improved?

(George) Thinking harder about the manageability early on would have made life much easier.  For example, LSP ping to look up and down the stack.  Implementing that turned out to be hard.  We hadn't thought about "if we put something into the forwarding plane, how it got there, who owns it, etc."  Reverse engineering, getting the right information in a timely fashion, was hard.  If we thought about it early on, things would have been easier.  There is a big rush to add features early on.  As a vendor, I wish we had lobbied harder for diagnostic stuff.

(Tom) I would second that.  The drive for more features often comes from the ISPs, so the lack of diagnostic tools is our fault, too.

(Kireeti) We definitely need to know what our circuits are doing, but we have IP networks that we debug (loose sense of the word) with ping.  I think it is good that we did the forwarding part before OAM.  Merging, and ECMP, are the most common ways of building MPLS tunnels today.  There is use of RSVP, but lots of use of LDP.  I think it was right to start from "what behavior do we want", then try and find out what it is doing.  That is opposite of some SDOs, but we also could have done the OAM quicker.  MPLS is perceived as being very complex because of the features, knobs, issues.  To make it more plug and play, and easier to manage, would have been good.

(Andy) The work that we have in COS transport profile is giving us a chance to re-think MPLS OAM and manageability. Input from other SDOs on this is good.

(Olaf) The reason for having this topic on the agenda, looking at it from a protocol success perspective, was to see if there is anything that we can learn that would apply to other work that we are doing in the IETF.  In that very open question, is there a lesson that you learned that might be applicable to other IETF work?  Anything to warn us about?  How to work with IPv6?

(Kireeti) I know little about IPv6.  In trying to learn something about it, it seems there are so many variants of getting 6 and 4 to interoperate.  We have more solutions than implementations, and zero deployments.  I think we need to get one implemented, and work with it until we get it right.

(George) The success of MPLS was that we had to recognize that IP had won.  But MPLS is very married to IPv4 at this point, and there needs to be a transition of MPLS for IPv6.  But with MPLS we don't have to deal with hosts, so we don't have to deal with the hard problem.

(Tom) From a pure transport perspective, we have some networks that could carry IPv6 traffic.  Layer 2 networks, vBNS supports that to a great extent.  However, we do not have much experience trying to integrate v6 into a large network.  On the command and control aspect there is little work.  We have had interest at times, but the drivers aren't there, so we end up working on something else.





IAB Open Mic Session





(Keith Moore) This came up in a BOF today... does the internet have an architecture anymore?  I am not asking that in jest.  I hear the word "architecture" being passed around a lot, but don't think we have that set of shared assumptions any more.  And if you don't believe me, go to the BEHAVE WG, or others.  There are lots of proposals for things that would violate what we used to call the architecture.  Granted, there are things that have been deployed that are beyond our control.  But what is the process for recovering a shared set of assumptions?

I am not sure that our organization has a good process for that, especially when we split things up into areas, WGs, etc.  We are bad at that.  We need to be better at defining our arch and maintaining it.

(Barry Leiba) I don't have a complete answer, but perhaps some comments.  The architecture changes over time.  It was different before the WWW.  We just had a BOF on MMOX.  P2P changed the architecture.  Our challenge is to look at how things fit together when they come in, so that it doesn't fragment our efforts.  I am not sure if we are succeeding at that, but that is what we need to do.  I don't think we break things up into WGs in a way that ignores the others.  The IESG and IAB work to understand how things fit together, in a way that makes sense.

(Keith Moore) I do agree there are efforts at the time of WG charter, but then there is a huge amount of effort to go down that path for better or worse.  I don't think the WWW changed the architecture, or P2P.  So I don't think we have a shared set of assumptions anymore.

(Kurtis Lindqvist) I kind of agree with Keith, but I am not really sure what we can do about it.  The reason we no longer have the same set of shared assumptions is that the Internet is used in many ways.  But the Internet is successful because it allowed for different assumptions, and allows people to develop different things and make money with it.  So I agree with the observation, but I can't make up my mind whether this is a problem or not.

(Andy Malis) I think it is a good observation, there is lots of evolution.  Mark Townsley told the story that there was a gaming convention in town, and he was talking with some of the younger gamers, who asked "you mean NAT wasn't an original part of the internet?"

People think they know what the internet is, but it is a whole lot bigger than anyone knows.  For example, if you take a core sample, you would find a lot in there that people don't even realize.  It is a huge job to try and keep on top of everything that is going on, but I agree we should try and ensure we keep a core set of assumptions.

(Dave Oran) I think Keith is accurate.  All of us struggle with how much of the original properties we can recover.  In the early days there were hopes.  But the IETF decided explicitly to move away from the "council of cardinals" approach, and there were consequences of that decision.

Being on the IAB, sometimes I feel like the priest in a confessional.  Someone comes and tells me they created a NAT, and all I can say is "don't do it again" and have them say some Hail Marys.

(Scott Brim) I think Keith is lucky to live in interesting times.  We are on the cusp right now of big turmoil with NAT, loc-id sep.  We are trying to coordinate, like in the MIF BOF.  But they are just interesting times, and the architecture is fluid right now, more fluid than it used to be.  I agree with something Tony Hain said - "we have to be careful that the stuff we put in to adapt, while we figure out principles, is stuff we can take out later once we have solved the basic principles."

(Wes Hardaker) I think the problem is that the deployed architecture is not what we want.  No one designed that.  Lately we have been spending more time on reactive health, rather than proactive.  We have 15 illnesses, like NAT.  We need to figure out how to advance to that "well being" stage again.  We used to have an architecture, but right now we are dealing with this stuff.

(Stuart) Within the IETF, we do a pretty good job but we are not the only ones influencing the internet.  We had an open network, and then got firewalls, VPNs, the iPhone.  And the device in your hand is connected to two internets that are not quite the same.  Nobody here has directly decided that, and we at the IETF cannot tell companies what to do.  The best we can do is try and set the incentives, guide, but this is very subtle.

(Hannes Tschofenig) In the NOMCOM this year they asked the community for feedback on the IAB.  There was confusion about what the IAB does, and Joel said to ask the IAB members.  Do you have other ideas of how to inform us of what you are doing?

(Olaf) One of the first and most important things is that we are clear on what we decide in our meetings.  Having good sets of minutes available is #1.  Also, these plenary reports are telling you what we do.  Normally, we never hear many questions about these reports.  Having said that, we have been discussing how to improve that information flow in a more timely fashion.  But we are not quite sure what form we should use to do that.  What form do you think we should use?

(Leslie Daigle) I would like to observe that one of the reasons the IESG role is clearer is that the IESG is involved directly in the operational issues, the day-to-day work of many IETF participants.  The role of the IAB is not like that, and that is on purpose.  When the IAB does get involved in operational issues there is usually push-back.  So there is always going to be some distance and a lack of clarity about exactly what the IAB is doing.  I think you could make it clear that architecture is a distinct and separate role.

(Brian Carpenter) It is interesting that two former chairs (Leslie and Brian) are trying to answer this question.  When you leave these committees there is a sort of sensory deprivation for a while.  I think there is no good answer to your question: the whole IETF can't come to an IAB meeting, but then we don't know what you are doing.  The best thing we can do is to see IAB members be very active in WGs and BOFs.

(Dave Thaler) On the question of how we communicate today, it is a combination of push and pull.  There is Olaf's report at the plenary, articles in the IETF journal, and publishing notes and reports.  There is also participation in some WGs and BOFs, IAB thoughts on IPv6 NAT, etc.  Then there are pull-based mechanisms like the meeting minutes.  The things we think are most important we push.  Can it be better?  Maybe.

(Olaf) It is hardly ever the case that during a BOF "the IAB opinion" is given.  But there are IAB members there.  Although it is an IAB responsibility to be involved, it is hard to gel to one voice all the time, so it is individual participation that is visible.

(Dave Oran) It is important to understand, when you see IAB members moving around, that it may look like individuals not looking for a formal IAB position - but they are there participating because of their IAB role; otherwise, many of these activities would be a lower priority for them as individuals.

(Danny McPherson) Much of the work the IAB does is not that sexy or exciting.  There is the RFC editor, liaison relationships, BOFs.  There is some architectural stuff as well, but that is one thing that frustrated me when I came on the board.  I remember thinking, e.g. "we need to say something about v6!", but it is hard to have a clean, strong opinion, because there are many factors.  There is a huge amount of stuff the IAB does that is not sexy, for example, some of the stuff in the reports that Olaf mentioned.

(Andy) We have an email list that anyone can send mail to, iab@iab.org.  That goes to all IAB members.  Please do not hesitate to send comments and suggestions.

(Paul Matthews) I have never been on the IAB, but here is the perception of an outsider...  I have heard about all these things that happened behind the scenes.  The view is that there are people on the IAB acting as individuals, but not much of the view of "a body".  Even when there is an IAB document, it is often the work of one or two individuals, not a group that has sat down and thought about this hard.  More of that would raise the influence of the IAB.

(Barry) Any doc that comes out as an IAB doc has one or two editors.  Rest assured, we all spent a lot of time working on the doc.  You can't judge solely by the names on the top.  A particular doc may have more primary contributors, but everyone has reviewed, discussed, etc.

(Gregory) For example, on the NAT 66 doc we had over 5.5 hours of discussion time in the past few months.  That is not even including reviews of drafts, email discussion, etc.  One of the reasons these do not get reflected outside that much is that these are hard topics.  We have a good relationship within the IAB, but there are very different opinions and perspectives on how a topic should be addressed.  It is hard to not be too prescriptive, but still be helpful, to not claim work that should be in a WG, and find a common voice.  And we put something out, and then wonder if what we put out was good enough.

(John Leslie) You could take a different approach.  Every time I see an IAB statement, I know it took a lot of effort, and think that it may accomplish little.  I would rather see docs from a smaller subset of members at times.

(Gregory) It is easy to look at the levers that the IAB has available:  we can release a doc, host a tech plenary.  But was an architectural difference observed?  Is host reachability any better?  These are longer term measurements, but that is what we have been asking.  For example, the shirts this time around...  We can push and nudge in different ways.

The suggestion that the docs the IAB puts out have little impact - I don't agree.  They are often referenced.  If the community has a shared idea of what is good or bad, the IAB writes that stuff down so that the principles are not forgotten.

(Tony Li) I have been observing the internet since 1982.  One thing is sure: it evolves and grows in ways that we cannot control.  The IAB has no chance to control it.  The best it can do is bump things this way, kill off a bug, but the evolution will keep it moving.

(Olaf) I whole-heartedly agree.

(Dave Crocker) You are an interesting body of expertise and availability, but I think your time is taken up by too much "daily stuff".  It is useful stuff, but it can consume you.  Every time the IAB takes the initiative to write a paper on its own I think that is good.  I would encourage you to think about tasks that are consuming a lot of time, that are real work, but that do not have to be done just by the IAB, in order to free up time for stuff that needs to be done by the IAB.

(Hannes Tschofenig) It looks like we have had a couple of those BOFs in the past where the IAB gave a presentation.  I like the documents you publish, but I would not see that as so tremendously useful.  I would like to see more of things like the following: Jari helping with the SAVI BOF, where some folks came up with ideas, and needed help structuring the work and getting it into the IETF.  That would have been good for IAB members to do.  Another example is the stuff Jon and Cullen did with the peer-to-peer community, working with people outside the IETF.

(Christian Vogt) The impact of the IAB work has a lot to do with how specific it is.  If there is a BOF, and the IAB has impact, it can be great (e.g. the NAT66 BOF).  Things that are more general have less impact, but are very important (e.g. the IP model paper; this is one of the most important documents ever from the IAB).  I think it is the WG chairs who need to take this IAB guidance and put it into action.  So, IAB documents can have an impact.

(Jari) In response to Hannes - in all the times we charter new work, the IAB has done a lot of work in each of those.  For that I thank the IAB, it has been necessary to make those WGs happen, but that is sometimes a hidden effort.  There is a lot of work in the background between the IAB and IESG members.  Maybe some of the reports from the BOFs could go into the public view.  Not all of the critique would be appropriate, such as opinions on how people manage a BOF, but some of this might be good for wider publication.

(Gregory) That is an interesting thing for us to consider.  We should discuss how to do that...

(George Swallow) Around 10 years ago Steve Deering held a seminar on the future of routing.  The seminar was not necessarily seen as a great success, but I think that was one of the great things that the IAB has done.  And I got a much deeper understanding of the issues they were struggling with, interplay with people outside, etc.  I would like to see more of that.

(Alain Durand) I would like to talk about IPv3.5.  There were a lot of activities this week to extend the life of IPv4.  Those are necessary, but the fact remains that there are major changes to the underlying assumptions - e.g. an IPv4 address used to represent an end point or user.  That is not necessarily the case anymore.  Many changes result from this (NATs, NATs behind NATs, etc).  There are tons of changes to how to do intercept, logging, etc.  What is the IAB take on this?  A document saying your set of assumptions has changed... how to adapt... consuming too many ports considered harmful... would be good.  I think that IAB docs have a lot of weight outside.

(Stuart) For everyone's information, Alain referred to IPv3.5.  We have been talking a lot about how IPv4 evolves with NATs, etc.  Some people have been calling this IPv4.5, but that implies the architecture is getting better.  In reality, it is more like IPv3.5, since the architecture is actually worse.

(Alain) What are you guys going to do about this?

(Vijay) Eventually you build this house of cards...  I want to echo what Tony Li said.  We (the IAB) can write a paper, but people do not act based on documents, but on spreadsheets.  I think in the long run this is a self-correcting problem.

(Olaf) It is very hard to do the right thing.  Just as this community is having this debate, we are also internally having this debate.  The community is reflected within the IAB.

(James Woodyatt) If you are struggling with making a consensus statement, it might be worth a minority report.

(Olaf) What we often end up with is a set of considerations.

(Danny) The IAB spends a lot of time figuring out what to say, and whether to say something at all.  The benefit of that is that when the IAB does make a statement it is well thought out, and typically stands the test of time.  That is something important to remember, and something I have come to value.  We can all make individual statements, but the value is in the consensus.

(Andy) There have been several occasions where the IAB tried to push a particular viewpoint on a technology, and the rest of the IETF pushed back.  That is not our intention.

(Leslie) I want to amplify what Danny said.  Even if you can't change the world by writing a document, if you can write a document that is reflective of the collective intelligence of the IETF, then others might (change the world).

(Dave Crocker) It is ok if we (the IETF) don't always agree with you.

(Olaf) It is more that we want to love the document ourselves.  It should be an *IAB* document, an IAB piece of work, not just 5 people in the hall who wrote something.

(Dave Crocker) I was responding to the perceived fear of push-back from the community.

(Olaf) It is more the case that, because you are reflected in us, the minority is reflected here.

(Barry) We have been reluctant to publish because we were not happy with the document yet, but not because we were afraid you wouldn't like what we had to say.

(Thomas Narten) I think you are getting the balance pretty much right.  There have been the same conversations for years, and overall the balance is about right.  The trade-offs are hard.  We all know a "no NAT" document doesn't work.

(Gregory) To Alain, the position we are trying to take in lists, BOFs, etc. related to IPv4-IPv6 topics is that there need to be transition mechanisms.  We may need translation.  And try to ensure that the use cases are clear and validated.  Jari and Mark's paper is a good start.  Updating that doc will help the various groups working on this, to see if a particular proposal will address the problems.

If there is only one way to solve a use case, and it deteriorates the end-to-end model, then we fall back to UNSAF - try to set a time limit, a way to back out.  Some of this is out in the open, some is behind the scenes.

(Alain) My request was not for a "NAT is bad" document.  I was suggesting explaining the consequences of where the architecture is going, explaining that to the community.  The assumptions about the architecture...

(David Black) The IAB need not wait until it thinks it knows the answer.  It is sometimes fine to just frame the debate, even without answers.

(Eric Rescorla) I was on the IAB for 6 years, and I heard these same questions over and over again.  If this is the best we can come up with, we might as well go home.


(These notes were taken by Dow Street and Mirjam Kuehne, and reviewed for correctness by the panelists and the IAB. Please send any additional corrections to execd@iab.org).

Slides

IAB Agenda
IRTF Status
IAB Report
MPLS Becoming a Teenager (moderator)
A Brief History of MPLS (George Swallow)
Vendor Perspective (Kireeti Kompella)
Operator Perspective (Tom Bechly)