An IETF meeting is a busy time for the Area Directors. We do not have much time for discussing IETF-wide topics or getting to know new team members. Every year we meet for a couple of days as a team, outside the IETF meeting. This year we met in London, partially interleaved with the IAB meeting that also took place in the same location.
This article is a summary of the discussions at the retreat. The main topics were how the IESG could work better, how to make sure the IETF community is working most effectively, how and when to introduce the new RFC format, and the progress so far on addressing pervasive monitoring. We also discussed a number of other important topics about how to keep the IETF working well, both today and into the future.
We spend the retreats not so much on day-to-day document or working group issues, but rather on asking higher-level questions about our organisation and how we work at the IETF. This year we started off with the ongoing re-organisation of areas. With the new ADs in place, we are now ready to formally switch to the new area structure. The biggest change is the creation of the Applications and Real-Time (ART) Area with three ADs. A revision of BCP 67, defining the DISPATCH process, will be submitted; desirable qualifications for next year’s IESG selections are being worked out for the NomCom; and the IETF tools are ready for the new area. I expect to see a formal decision on the creation of the ART area at the next IESG telechat. The primary reason for creating this new area is that it allows us to be more flexible in taking care of rapidly changing topics in this domain, without being too constrained by the area structure.
But that change is just organisation. It is good to have the organisation match the kind of work we do, but there are even more important things. For instance, ensuring that we work in the most efficient way, and that work happens where it most naturally fits. The IESG has discussed “smallerising” its workload. The idea is that if AD tasks were not as close to full-time as they are today, good things would follow; for instance, many more people could consider taking on the AD role. But the crux of reducing the workload is that an AD’s work consists of many small things that together take a lot of time, and it is difficult to find a single task with significant savings potential. Nevertheless, several ADs are planning to run an experiment in their areas around document review tasks.
But how the IETF community as a whole works is even more important than the IESG’s workload. The community is where the real work happens! One of the trends we see is a more prominent role for open source efforts. The IESG noted the early good experiences from the Hackathon, and we plan to continue and grow these events. We also noted good experiences with focusing IETF work on data models. A lot already happens with data models at the IETF, but there’s plenty to do to make the open source and IETF worlds work even better together. As an example, at the Hackathon, Benoit Claise worked on tools to integrate checking and I-D generation of YANG models. When data models can easily be moved to different formats or checked against each other, producing high-quality models becomes easier for everybody. We are spreading the word about the upcoming Hackathon in Prague, planning for next year, considering whether it would be possible to invite students from local universities to these events, and so on. If you have further ideas around the Hackathon, let us know. And do sign up, as we still have only a limited number of seats in Prague.
Similarly, in the past year, the IESG has discussed how the increasingly common document collaboration model built on GitHub and other popular tools has been successfully used in several Working Groups. At the retreat, the IESG discussed creating a wiki that describes tools that have been used successfully, with the details of how to set them up properly for a Working Group, so that WG chairs can easily investigate using one. As part of this, the IESG discussed the work necessary to clarify how to apply the IETF Note Well and various IPR notices to such tools.
Heather Flanagan, the RFC Series Editor, also participated in our retreat. Her focus has been the introduction of the new RFC formats. The question for the IESG is how and when to introduce those formats into the IETF process. We’ve already asked our tools team to take the first easy step of accepting drafts in XML format without requiring a text version as well; of course, a text version will always remain acceptable. Two advantages of the new RFC formats are the ability to accurately include authors’ names, regardless of character set, and the ability to include diagrams in SVG. We’re planning to take the first Internet-Draft with non-ASCII author names through the process soon, and to proceed to diagrams once that has been successfully completed. Of course, there are still a few questions around determining which display versions of a draft are reviewed when, and how to tell whether there are substantive differences between the different display versions. Similarly, additional tools work will be needed; for instance, the ability to compare XML and text versions, if both are submitted, and to verify that the differences aren’t an issue (e.g., copyright changes) still lies ahead.
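To make the non-ASCII author name support concrete: in the v3 XML vocabulary, an author element can carry the displayed name in full Unicode together with an ASCII fallback attribute. The sketch below is illustrative only; the asciiFullname attribute comes from the v3 vocabulary, while the sample name and organisation are made up.

```python
# Sketch: how an xml2rfc v3 <author> element can carry a non-ASCII name
# together with an ASCII fallback. The sample XML is invented for illustration.
import xml.etree.ElementTree as ET

sample = """
<author fullname="Jörg Müller" asciiFullname="Joerg Mueller">
  <organization>Example Org</organization>
</author>
"""

author = ET.fromstring(sample)
display_name = author.get("fullname")                  # used in Unicode-capable outputs
fallback = author.get("asciiFullname", display_name)   # used where only ASCII is safe

print(display_name)  # Jörg Müller
print(fallback)      # Joerg Mueller
```

Tools that render or compare draft versions can then pick whichever form is appropriate for the output format at hand.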
One of the actual technical topics at the retreat was a review of where we are with pervasive monitoring and defences against it, two years after the Snowden revelations. While improving Internet security is hard and slow, a significant amount of work has gone on in this space during those two years: our first technical plenary on the topic, the STRINT workshop, RFC 7258, continuation of our work on improved and more easily usable security in TLS 1.3 and HTTP/2, chartering of the UTA, TCPINC, and DPRIVE working groups, deprecation of RC4 and SSLv3, the introduction of ChaCha20 and Poly1305, the RFC on opportunistic security, the new IAB program on privacy and security, and countless examples of working groups carefully analysing their technology in light of possible privacy dangers. At the same time, the world seems to care more and more about security, and is increasingly deploying secure protocols. Pervasive monitoring continues to be a significant concern, and communications security is just one part of the defences. But the IETF is clearly committed to doing what it can to improve the technology.
We also talked with Greg Wood, ISOC’s public relations expert, about the information that different interested parties need from the IETF, in view of the website renewal project and other efforts.
The IESG responded to concerns that the datatracker sent too many e-mails, to too many people, on too many draft state transitions, by working with Robert Sparks and the tools team to review which draft state events should lead to e-mails being sent, and where they should be sent.
Ray Pelletier talked about services for remote attendees, the anticipated growth of this service, and the need to integrate remote attendees effectively and efficiently into the WG sessions.
We also talked about IETF educational efforts. The EDU team has told us that it needs new members, and that this is also a good time to re-assess how the team works and what the highest priorities for the work are. Martin Stiemerling will be participating in the EDU team from the IESG, but again it would be good to have volunteers from the community! We are planning to run a session at IETF 93 about this, so that the community can weigh in on priorities and we can brainstorm about the best ways of running IETF EDU sessions. My personal thought is that the world is changing, and maybe our focus should be on building a great YouTube library of IETF educational materials. This would fit the growing role of remote participation, and enable people to learn about the IETF in more targeted ways; say, for someone who needs to come in to develop a specific piece of technology. This would imply shooting excellent tutorials once rather than necessarily running them repeatedly. It would also imply more work on organising the library of information and access to it. If you have any thoughts on this, let us know!
Finally, the IESG discussed situations relating to the authorship of documents. The RFC Series Editor has made a statement about expectations relating to authors of documents. In addition, we have seen a few cases where it came as a surprise to people that they had been added as co-authors of documents. Please work with your co-authors to ensure that everyone is on board with being an author (and aware of all the responsibilities involved). An IESG statement on this topic will appear soon.
Overall, this was a very useful meeting for the IESG, and will be followed by a series of new arrangements. In London, we were hosted by Ted Hardie and Google, and I would like to thank them for their hospitality!
Jari Arkko, IETF Chair
The first flow-related BoF (birds of a feather) took place in London in summer 2001 during the IETF meeting 51. A few months later, the IP Flow Information Export (IPFIX) working group (WG) was created, with the following goal in its charter: “This group will select a protocol by which IP flow information can be transferred in a timely fashion from an “exporter” to a collection station or stations and define an architecture which employs it. The protocol must run over an IETF approved congestion-aware transport protocol such as TCP or SCTP”. The charter planned for three deliverables: the requirements, the architecture, and the data model. At that time, I was told the intent was to standardize NetFlow, a proprietary implementation, which was already deployed in operator networks.
And so, it started.
The WG debated for a long time on the requirements for the future IPFIX protocol selection, and hence the IPFIX architecture. There were five candidate protocols, with different capabilities, to select from, and each candidate’s proponents were obviously pushing their own protocol.
From there, the chairs decided that the WG should classify all requirements as “must”, “should”, “may”, and “don’t care”. The “Requirements for IPFIX”, RFC 3917, documented this outcome. An independent team, in charge of evaluating the different protocols in light of the documented requirements, concluded that the goals of the IPFIX WG charter would best be served by starting with NetFlow v9, documented in the meantime in the informational RFC 3954.
By that time, three years had passed.
The next couple of years were dedicated to the IPFIX protocol specifications. According to my recollection, the WG spent a year, or maybe a year and a half, on transport-related discussions: should we use TCP or SCTP as the congestion-aware transport protocol … while most customers only cared about UDP, since their flow export collection is exclusively within their management domain … and where the distributed nature of forwarding ASICs complicates congestion-aware transport requirements.
The final specifications compromised on: “SCTP [RFC4960] using the PR-SCTP extension specified in [RFC3758] MUST be implemented by all compliant implementations. UDP [UDP] MAY also be implemented by compliant implementations. TCP [TCP] MAY also be implemented by compliant implementations.”
The IPFIX protocol (RFC 5101) and the IPFIX information model (RFC 5102) were finally published in January 2008 as Proposed Standards. In the end, IPFIX is an improved NetFlow v9 protocol with extra features and requirements, such as transport, variable-length string encoding, security, and the template withdrawal message.
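For readers curious about the nuts and bolts, the difference from NetFlow v9 is visible right in the message header: RFC 7011 specifies a fixed 16-byte header carrying the version number (10 for IPFIX, versus 9 for NetFlow v9), message length, export time, sequence number, and observation domain ID, all in network byte order. A minimal Python sketch (field layout per RFC 7011; the helper names and sample values are my own):

```python
# Build and parse the fixed 16-byte IPFIX message header from RFC 7011:
# Version, Length, Export Time, Sequence Number, Observation Domain ID.
import struct
import time

IPFIX_VERSION = 10
HEADER_FMT = "!HHIII"  # network byte order: two 16-bit, three 32-bit fields
HEADER_LEN = struct.calcsize(HEADER_FMT)  # 16 bytes

def build_header(length, export_time, seq, domain_id):
    return struct.pack(HEADER_FMT, IPFIX_VERSION, length,
                       export_time, seq, domain_id)

def parse_header(data):
    version, length, export_time, seq, domain_id = \
        struct.unpack(HEADER_FMT, data[:HEADER_LEN])
    if version != IPFIX_VERSION:
        # NetFlow v9 messages carry version 9 here and would be rejected
        raise ValueError("not an IPFIX message (version %d)" % version)
    return {"length": length, "export_time": export_time,
            "sequence": seq, "domain_id": domain_id}

# A header-only message (no sets), with illustrative values:
msg = build_header(HEADER_LEN, int(time.time()), seq=1, domain_id=42)
parsed = parse_header(msg)
```

The templates, sets, and information elements that follow the header are where most of the extra IPFIX machinery lives; the header itself is deliberately simple.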
The IPFIX WG closed in 2015, with the following results:
– the IPFIX protocol and information model, respectively RFC 7011 and RFC 7012, published as Internet Standards
– a series of RFCs regarding IPFIX mediation functions (from a subsequent charter)
– almost 30 IPFIX RFCs in total (architecture, protocol extensions, implementation guidelines, applicability, MIB modules, YANG module, etc.)
– the IPFIX community’s work on PSAMP (Packet SAMPling), which selected IPFIX to export packet sampling information and produced four RFCs.
What are the lessons learned from those 13 years of standardization?
– 7 years (or 13 years, depending on whether you count to the Proposed Standards or to the IPFIX WG closure) of standardization is way too long, and is inadequate today, in the age of quick(er) open-source development. The IESG (Internet Engineering Steering Group) discussed this issue during a retreat, recognized the problem, and stressed that “WGs should have solution work from day 1”, as explained by the IETF chair in his report at IETF 90. So basically: let’s not spend precious WG time on use cases and requirements, unless we have to.
– If the intent behind the charter is to standardize a specific solution (citing the OPS AD at the time, “the intent was to standardize NetFlow”), then the IESG and the charter should be clear about it. For example, “The WG should consider <draft-bla> as a starting point.”
– <Area Director hat on> Now that I’m part of the IESG, I can tell you: don’t fight the IESG regarding transport protocols. If the protocol will ever run over the Internet, it must run over a congestion-aware transport. Full stop. </Area Director hat on>
<Benoit hat on>In the end, the operators will do what they want anyway, and request what they need from equipment vendors.</Benoit hat on>
Maybe the only important question is: is IPFIX a success?
– From a specification point of view, yes. Granted, should we start from scratch today, we would change a few things, reflecting on years of experience.
– From an implementation point of view, yes.
– From a deployment point of view, not quite yet, but getting there. Indeed, it took so long to standardize IPFIX that the NetFlow v9 implementations improved over the years. The world will see more IPFIX deployments when operators require IPFIX-specific features … and there are not many SCTP IPFIX requests right now. 😉
In conclusion, we can say that IPFIX is a success, but the world has changed a lot since 2001, lessons have been learned, and today we approach standards in the IETF differently.
Regards, Benoit (Operations and Management Area Director)
On the weekend before the IETF meeting in Prague (July 18-19), we will hold our second IETF Hackathon event at the Hilton Prague. Improve the Internet with new software, new networking technology, or further developments of open source platforms. With your colleagues and partners!
Sign up for the event at the registration page! Participation is free, but we have space for only 100 developers.
The strength of open source and other similar efforts is working with others. The Hackathon is an opportunity to work with others behind new software projects and the technologies themselves. What would you like to work on? HTTP/2 implementations? Internet of Things? Routing? Video codecs? Privacy-aware DHCP clients? Add your suggested topic to the wiki and gather a group to work on it! Or ask your working group if the developers would be interested in some joint development or testing.
Or, if you are already working on a project, the Hackathon is an opportunity to bring your team together for two intense days.
You can also just sign up and decide at the event which one of the projects sounds most interesting. I personally found that the mixture of different people and technologies was the most useful thing in our previous Hackathon, so consider also joining an effort that you have not previously worked on.
I would like to thank Cisco DevNet for sponsoring this event. Credits for the above photo go to Olaf Kolkman.
For more information, see the Hackathon home page and join the mailing list.
Jari Arkko, IETF Chair
This is a brief report from a meeting between a number of Internet organisations that took place last week in London. You may have heard these meetings referred to as I* CEO meetings, somewhat inaccurately, since, for instance, the regex doesn’t match the actual organisations. From the IESG the participants were Barry Leiba and myself, and Andrew Sullivan and Ted Hardie participated from the IAB. There were other participants from the RIRs, ICANN, ISOC, W3C, and ccTLDs.
The purpose of these meetings is informal information sharing. Occasionally they allow ‘light’ coordination as well, for instance, when one organisation leads an effort and the rest of us can get involved rather than setting up similar efforts ourselves. The meetings do not make decisions, as obviously the processes in our different communities are in charge of that.
Everyone in the meeting highlighted how the strength of the organisations’ work around the Internet is in its distributed nature – just like with the Internet itself. This ensures stability, and that the different communities are served in the ways that they need.
The ongoing effort to transition the US government’s role away from the stewardship of the IANA functions was a key discussion topic in the meeting. The participants highlighted how important the role of the communities is in this process. All the affected organisations have set up processes that are fundamentally about the communities deciding their paths in the transition. The same is true of the process run by the IANA transition coordination group (ICG). The IANA arrangements depend on the support of the communities, and on their ability to define those arrangements.
The participants thanked the communities that have risen to this challenge, and expressed their commitment to completing the task.
My personal opinion is that while IANA is important and the stewardship transition is significant, at the same time we should put things in perspective. IANA services as provided today are working very well. We at the IETF have always worked on the continued evolution of the IANA services, and the transition is “just” an additional step in that evolution. Furthermore, the IANA services are clerical, and for the case of the IETF and RIRs, any changes resulting from the transition are minor. The system continues to run as it has done for decades, which is good. Boring and uneventful is good. We should treat this as business as usual, and avoid adding aspects that have little to do with the clerical work or its oversight.
My opinion is also that we at the IETF are largely ready and can proceed with the remaining parts. One of our next steps is yet another set of yearly changes to our existing agreements. These changes have not yet been executed [1,2,3], but eventually will be. Wearing my engineer hat, I think the overall transition effort should be thought of as a project rather than an instantaneous change at some point in time. In any case, we’ll get there, and everybody in the meeting re-affirmed their commitment to getting the community plans for the transition executed.
The meeting also discussed a number of other topics, including the continued growth of mobile networks as the most common form of accessing the Internet. Just last year, 800 million smartphone subscriptions were added and the growth is accelerating. The meeting discussed the fast pace of evolution in web technology (such as with HTTP/2) and the Internet of Things, open source and other collaborative methods of developing technology, and efforts around improving security and privacy. The meeting also highlighted the importance of having an open Internet in light of the fast evolution and growth. The ability to freely create new innovations and services on top of the Internet is a key to its utility for the world.
Other participants from the meeting have provided additional perspectives, for instance, Andrew here, the NRO here, and APNIC here.
Jari Arkko, IETF Chair
Every year, the IETF’s leadership groups (IAB, IESG, IAOC) meet for retreats. This year all three groups are meeting in London. The IESG meeting is just starting up now on Monday morning, the IAB meetings will continue during the week as well, and the IAOC already met a few days ago. The topics on the agendas include next steps with our re-organisation, a review of our efforts around improving the privacy of the Internet, RFC formats, the IANA transition, future meeting locations, and many others. As the meetings conclude, we will provide a report of our main conclusions.
Jari Arkko, IETF Chair