5G and Internet Technology

5G is the latest generation of cellular network standards. There’s a tremendous amount of activity around it in the industry. But how does 5G relate to Internet technology? Are there 5G-related work items that the IETF should be working on, for instance?

While at times the 5G stories take on an almost myth-like nature, the basics underneath 5G are concrete changes in the technology and our increasing needs for communication. The traffic growth for both our smartphones and homes continues to be exponential. And as organisations and societies increasingly connect their systems, there are also many new needs.

5G responds to these needs with new radio technology and a core network that employs state-of-the-art network technologies such as an increased use of cloud, virtualisation, and open source components and processes. From a standards perspective, the timelines for the first systems are very near. The 5G work happens to a large extent in 3GPP, as it did for previous generations. The work on 5G is planned to take place in two releases, of which the first is Release 15, scheduled to be stable, with all protocols completed, by September 2018 at the latest, just 14 months away. Additional work will be done in Release 16, which will complete by March 2020.

What is 5G?

But what exactly is 5G then? First, it is a new, very capable radio. With beamforming, MIMO antenna technology, and frequency bands reaching into millimeter waves, it both provides higher transmission speeds and serves more users at the same time. The new radio is also needed to serve mass deployments of networked sensors, and to enable various mission-critical services that may require better latency or reliability characteristics. 5G radio can provide speeds in the Gigabit range, up to 10 Gbps or even beyond; with large numbers of users the per-user speeds are lower, but the target is still at least tens of megabits per user, for tens of thousands of users.

Second, 5G targets a set of use cases, such as the familiar mobile broadband use case. But 5G is also intended to open the use of communication and cellular networks to many new industries. The goal is to be able to tailor the communication platform for a wide range of different services, ranging from low power IoT devices to self-driving cars, and from mission critical public safety communication to providing services to energy providers. Many of these use cases were hard to provide with previous generation technologies. For instance, one use case is about controlling remote machinery, an example of a service that benefits from the lower latencies that 5G provides. There is also higher demand for flexibility and configurability/orchestration.

From an IP networking perspective, as noted, 5G follows the same evolution as the rest of the networking industry. From a practical perspective, this is a big change, however, and requires effort. The work on details is ongoing for Release 15, but architecturally, the key directions are clear.

To give a few practical examples, interfaces to the devices are relatively similar to those in 4G. One difference is the ability to place different devices in different virtual networks or “slices”, which can be tuned and evolved independently from each other, both in their resources and in the networking technology behind them. Another difference from 4G is that the 3GPP security group plans to enable a more flexible authentication framework for the devices. And some of the control protocols inside the network may be changed from Diameter-based ones to REST-based APIs. From what we understand, the tunneling-based architecture for mobility is not changing, but of course with most services being provided in virtualised environments, the tunnel endpoints may physically reside in different places.

It should also be said that 5G is not a replacement for Internet-based services or Internet technology: the majority of the traffic that 5G will carry is for the usual Internet services, like videos from content providers. 5G is also not immune to impacts from Internet evolution. For instance, we’ve seen big changes in the use of encryption in the Internet, transport protocols are evolving, the use of CDN systems is growing, and all networks are becoming virtualised, software-defined, and cloud-resident systems. 5G networks need to serve an Internet that continues to evolve in this manner.

Is there an IETF connection?

It is useful to understand how 5G affects Internet technology. IETF work has been and will be affected by 5G. To begin with, the IETF works on many of the general facilities that modern networked systems such as 5G are based on.

Conceptually, one can think of the interactions as falling in the following categories:

  • New dependencies on existing IETF technology. For instance, the flexible authentication framework mentioned above is EAP (RFC 3748, RFC 5448). This is likely to require merely a reference to existing RFCs; if additions are needed, they are small.
  • Dependencies on ongoing work at the IETF. This includes various general facilities as noted above, but also other things. For instance, the IETF DETNET working group defines mechanisms to guarantee deterministic delays for some flows across a network. As one of the 5G use cases is time-critical communication and low-latency applications, this is a component technology that is being looked at. Similarly, IETF routing-related work such as traffic engineering, service chaining, and source routing provides likely tools for managing traffic flows in 5G networks.
  • Topics where there is clear demand for a feature, but it is unclear whether changes to Internet technology are needed, or the details remain to be determined. For instance, at the upcoming IETF meeting in Prague, we will be discussing whether some additional support is needed for what 5G calls Network Slicing. There are many IETF tools, however, for dealing with virtualisation and separation of networks, so the first order of business is probably mapping what can be done with those tools.
  • Larger, architectural changes, e.g., “future Internet” type solutions such as ICN (Information Centric Networking), are also sometimes suggested in the context of 5G. While these are perhaps unlikely in the first release of 5G, it is of course certain that the evolution of the Internet continues (and there will be future releases of 5G standards as well).

Going Forward

We asked Gonzalo Camarillo and Georg Mayer (liaisons between 3GPP and IETF) about collaboration between the IETF and 3GPP. They said that our best approach is to ensure that the 3GPP engineers are involved in the IETF work they are interested in, and that 3GPP states clearly what its requirements (rather than solutions) are. They also noted that the work in 3GPP is ongoing; hence, completing protocol requirements for 5G will still take some time. Gonzalo and Georg will be contacting the relevant parties on both sides to keep us in sync.

Exchange of information would also benefit from informal collaboration, for instance through Internet technology experts working with the 3GPP community. This enables common topics to be easily discussed and brought forward.

We should also note that there are clear boundaries between the two organisations. The IETF works on Internet technologies which may or may not get used in different networks. 3GPP puts together systems, architectures, and designs protocols specific to their networks and layers. The IETF is not in charge of making system level or requirement decisions for the 3GPP. Similarly, 3GPP leaves the evolution of Internet protocols to the IETF.

Also, recently Alissa Cooper, Chair of the IETF, visited a 3GPP meeting. Her report is here.

Finally, it should be noted that many of the existing tussles in the Internet continue to exist with 5G. For instance, the ability to provide a highly dynamic and programmable radio environment continues to present opportunities for collaboration between networks and applications. Such collaboration is not something that has historically been easy in the Internet, however. When we discussed this in the context of the growing use of encryption, the necessary changes to network management practices caused pain for operators. Perhaps now that some time has passed, and networks continue to evolve, we could consider network-application collaboration as an opportunity and ask what useful things networks can do for applications?

Jari Arkko, Ericsson, IAB Member
Jeff Tantsura, Consultant, IAB member
Image credits: Ericsson, 3GPP, IETF, and Wikipedia (Peter K Burian).

What does “Internet Access” mean?

On the joint day of the recent IESG and IAB retreats, the group discussed a number of topics related to network operator activities for encrypted flows. As part of that conversation, the group looked at RFC 4084, which tackled the question of what “Internet Access” means. A dozen years on, that subject probably deserves a new look, and several of the folks at the retreat agreed to draft a new version for community review.

As one of those volunteers, I’d like to dive into RFC 4084 a bit and explore what may have changed since it was published. After walking through the need to avoid pejorative terms, the RFC sets out the following types of connectivity: web connectivity; client connectivity only with no public address; client connectivity only with a public address; firewalled Internet connectivity; and full Internet connectivity.

For those who have bought enterprise connectivity recently, it’s obvious that several common categories are missing: dark fiber, lit service connectivity to a home office, managed MPLS tunnels, and so on. More importantly, though, the RFC doesn’t really touch on cellular wireless connectivity at all, which is now one of the most common ways people connect to the Internet. That means that it doesn’t touch on topics like data caps, roaming for data services, zero rating, or data compression proxies. For cellular connectivity, those can be the key to understanding the trade-offs in connectivity, privacy, and costs for a particular service offering.

Beyond that proliferation in available offerings, there has been another major change, in the ubiquity of filtering. RFC 4084 describes filtering at the ISP level in section 3 and notes “the effort to control or limit objectionable network traffic has led to additional restrictions on the behavior and capabilities of internet services”. RFC 7754 has since provided a much more detailed description of blocking and filtering, and it highlights restricting objectionable content as a category beyond blocking objectionable traffic. That blocking may be a requirement imposed by state regulators. In those jurisdictions, what RFC 4084 described as “full Internet connectivity” has disappeared, because service providers are required to prevent their customers from reaching specific Internet resources, services, or destinations. Even where blocks are not in place, regulatory increases in the amount of Internet tracking data retained and the length of time it is kept have become common. These may contribute to self-censorship in the use of some content. Put simply, firewalled Internet connectivity has become the default offering required of service providers within those territories.

Lastly, the document describes Internet connectivity in terms that apply to the services which would be consumed by a human user and, though some social networking or streaming services are not included, it is generally useful in that regard. As we move into an era in which devices talk to other devices, we also need to examine what a service provides for traffic among devices or between devices and back-end services. Is the implication of a web-only service that the Internet of Things is not supported, or is the implication that it must be reached by a web-based gateway or proxy? The difference between those two is a serious topic of contemplation now, and the architecture of a number of services will depend on it.

In many cases, the architecture of the Internet has developed in the course of a commercial dialog between network operators’ offerings and consumers’ use. Many efforts to make cellular systems walled gardens failed, for example, because the users simply weren’t willing to use them that way and wanted the broader connectivity of the Internet. As we look at this new tension among users’ desires for confidential communication, network operators’ management practices, and regulatory frameworks, a common vocabulary for the services available to the user may help us understand what architectures we can build. If you’d like to contribute to the early discussion, architecture-discuss@iab.org is one place to start.

Ted Hardie
IAB

Working Together with 3GPP on 5G

3GPP Meeting, June 2017.

Last week I had the opportunity to participate at the 3GPP plenary meeting in West Palm Beach, Florida, USA, at the invitation of the 3GPP liaison to the IETF, Georg Mayer. In addition to attending meetings of 3GPP’s radio access network group and system architecture group, I had the chance to kick off their new “Wednesday Speaker Club” series with a discussion of how 3GPP and the IETF can cooperate on 5G standardization.

The push towards the next generation of wireless networking technology has been gaining increasing attention and spurring new work across the industry, SDOs, and open source projects. 3GPP participants are investing tremendous effort to define and prioritize 5G requirements to help bring this technology to fruition. They are also working against very tight timelines, with the initial set of 5G standards due to be completed by June 2018. It is therefore both timely and important to identify whether dependencies between 5G and IETF work exist, as well as to identify mechanisms to ensure smooth collaboration.

The IETF and 3GPP have a long history of working together and many successes to build on, including our experience with SIP/IMS, EAP-AKA, and Diameter. Because 5G encompasses a broader swath of folks than those who have been involved in previous joint efforts, I spent part of my time at the meeting introducing how the IETF works, our focus on broadly deployable internet technology, and what we work on. I highlighted some areas of existing IETF work that may be of relevance in the 5G context, including our work on data models, service chaining, deterministic networking, and QUIC (look for more details on these areas in a forthcoming blog post). And I engaged with 3GPP participants around specific strategies to help our two organizations collaborate. You can see my slides here.

The speaker club Q&A session focused on the potential and practicalities of improving collaboration. We talked about the need to have technical experts from each group engage directly with each other (in addition to our existing liaison managers working in both directions), opportunities to provide more introductory presentations in both directions so people not familiar with 5G or specific IETF work can learn more, and ways to identify potential 5G requirements that may yield IETF protocol dependencies early on, even if later analysis in 3GPP reduces the urgency of the need for IETF protocol work.

IETF 99 should serve as a useful opportunity to continue this dialogue and gain more clarity about what specific dependencies we might expect between the 5G plans and IETF work. As noted in my recent post about BOF proposals, we’ll have a slot on the agenda to discuss some of the network slicing work motivated by 5G, in addition to numerous hallway conversations and ad hoc discussions I’m sure. For those working on other aspects of 5G not covered in the BOF proposals and who may be looking for guidance or input about overlaps with IETF work, feel free to reach out to the IAB, the IESG, or our liaison to 3GPP, Gonzalo Camarillo, with questions and comments. Several of us have been working to understand the 5G requirements better and would be happy to hear from you.

Alissa Cooper
IETF Chair

New Work at Upcoming IETF 99 Meeting

Prague

Before each IETF meeting, the Internet Engineering Steering Group (IESG) collects proposals for new working groups. We decide which ones are ready for community discussion on the IETF meeting agenda, with input from the Internet Architecture Board (IAB). We did this last week in preparation for IETF 99 and I wanted to report the conclusions:

BANdwidth Aggregation for interNet Access (BANANA) will be having a working-group-forming Birds of a Feather (BOF) session at IETF 99. BANANA is concerned with providing coordinated Internet Access to a device over multiple links of different types to allow for increased bandwidth utilization, load-balancing and/or higher reliability. The goal of this BOF is to determine whether the scope of the problem is well defined and understood, whether there is a critical mass of participants willing to work on the problem, and whether in general the working group would have a reasonable probability of success if chartered. The BANANA mailing list is here.

IDentity Enabled Networks (IDEAS) will be having a working-group-forming BOF. The goal of this work is to standardize a framework that provides identity-based services that can be used by any identifier-location separation protocol. The new requirements driving this framework go beyond the traditional discovery service and mapping of identifier-to-location for packet delivery. The goal of the BOF is to identify what specific work items are appropriate for IETF standardization. The IDEAS mailing list is here.

Network Slicing (NETSLICING) will be having a non-working-group-forming BOF. In this work proposal, a “network slice” is conceptualized as a logical network comprising the union of resources (connectivity, storage, computing), network functions, and service functions. Network slicing is a concept garnering much attention as part of 5G standardization and development efforts. The goal of the BOF is to identify whether a shared understanding exists of terminology, decomposition of the problem space, and relationships between the goals of the work and existing protocol work in other IETF working groups. Getting clarity on the priority of relevant requirements from 3GPP is also critical. The relevant mailing list is here.

We also received a proposal for a WG-forming BOF concerning 5G IP Access and Session Management Protocols (5GIP), which was not approved for this meeting cycle so as to provide more time for refinement. The responsible area director and others in the IESG and IAB who have been exploring the overlap between 5G and IETF work will continue to engage with the proponents to help gain more clarity, refine scoping, and understand overlaps with other SDOs.

Finally, we’ll have one newly chartered working group meeting for the first time at IETF 99: DKIM Crypto Update (DCRUP). The DCRUP working group is chartered to update DomainKeys Identified Mail (DKIM, RFC 6376) to handle more modern cryptographic algorithms and key sizes. The mailing list is here.

Looking forward to productive discussions in all these areas at IETF 99.

Alissa Cooper
IETF Chair

Increasing capabilities of advanced automatic crash notifications

This post is by Brian Rosen and Randall Gellens, participants in the ecrit working group.

A car crash on a country road.

Emergency calls placed by vehicles involved in a crash can provide significant benefit, especially when vehicle occupants are injured or unable to place a 9-1-1 call themselves. Sometimes called “Advanced Automatic Crash Notification” or “vehicle telematics”, the ability to automatically or manually place an emergency call when a vehicle is involved in a crash has been available for over two decades in the U.S., while the EU has a mandated system called “eCall” that is in the process of being deployed. Recently published IETF RFCs aim to expand the capabilities of such services, and to make them more broadly implementable.

Current U.S. systems are proprietary; some use non-standard in-band modems to send vehicle location and crash data from the vehicle to a call center, which then relays the information to the Public Safety Answering Point (PSAP, also known as an emergency call center). The relaying is done either by non-standard out-of-band data transmission or orally by a service center agent. Other systems place a 9-1-1 call, play a prerecorded message to the PSAP call taker, and use text-to-speech to convey vehicle location and sometimes crash data. The EU eCall system uses a standardized in-band modem to convey vehicle location and crash data from the vehicle to a specialized PSAP, which has a corresponding modem to receive the data.

The IETF has published two documents, RFC 8147 and RFC 8148, that specify how such calls operate using next-generation (all-IP) technology. Vehicles using these RFCs initiate emergency calls either manually or automatically in the event of a crash or other serious incident; the calls carry a standardized set of vehicle location and incident data. Such a call can be routed to a PSAP equipped to handle it, where the data can be automatically processed and displayed to a call taker at call assignment. During the call, the call taker can request that the vehicle send updated data or perform an action such as flashing its lights.

The IETF developed a generalized mechanism for making data related to an emergency call available to the PSAP along with the emergency call. This mechanism, called “Additional Data” (RFC 7852), allows standardized data “blocks” to be sent in a SIP (RFC 3261) call, either as data in the body of an INVITE message, or as a URL sent in a header field which, when dereferenced, yields the data block. RFC 8148 defines a data block for the U.S. “Vehicle Emergency Data Set” developed by the Association of Public-Safety Communications Officials (APCO) and the National Emergency Number Association (NENA), while RFC 8147 defines a block for the eCall data set used in the EU. These RFCs also provide a mechanism for the call taker to request that the vehicle perform an action, such as honking the horn or flashing the lights, to allow the responders to locate the vehicle.
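
To make the by-reference variant a bit more concrete, here is a minimal illustrative sketch (not taken from the RFCs) of the header lines a vehicle-originated call might carry. The service URN, purpose token, and URL are placeholder values chosen for illustration; the exact registered values are defined in RFC 5031, RFC 7852, RFC 8147, and RFC 8148.

```python
# Illustrative sketch only: compose the SIP header lines a vehicle might use to
# attach crash data "by reference" via the Additional Data mechanism (RFC 7852).
# The purpose token, service URN, and URL are assumed placeholder values;
# consult RFC 5031/7852/8147/8148 for the registered ones.

def call_info_by_reference(data_url: str,
                           purpose: str = "EmergencyCallData.VEDS") -> str:
    """Return a Call-Info header line pointing at an Additional Data block."""
    return f"Call-Info: <{data_url}>;purpose={purpose}"


def minimal_vehicle_invite(data_url: str) -> str:
    """A heavily abbreviated emergency INVITE (most mandatory SIP headers omitted)."""
    lines = [
        "INVITE urn:service:sos SIP/2.0",  # generic emergency service URN (RFC 5031)
        "To: <urn:service:sos>",
        call_info_by_reference(data_url),  # crash data attached by reference
        "Content-Length: 0",
        "",
        "",
    ]
    return "\r\n".join(lines)


if __name__ == "__main__":
    print(minimal_vehicle_invite("https://vehicle.example.com/crashdata/1234"))
```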

– Brian Rosen and Randall Gellens

IETF Profile: Mirja Kühlewind

Periodic posts on the IETF Blog highlight individuals who serve in IETF leadership roles, people who have recently begun working in the IETF, and organizations that make the work of the IETF possible. Each post aims to describe experiences working within or supporting the IETF. This one is by Mirja Kühlewind, who is an IETF Transport Area Director. You can also see her interview here.

Mirja Kühlewind, IETF Transport Area Director at IETF 98.

I first got involved with the IETF when I started my PhD. A colleague who was already involved pointed out that the IETF was starting work closely related to my own interests. I attended my first IETF meeting in 2010, when the CONEX [Congestion Exposure] Working Group (WG) held a Birds-of-a-Feather meeting. From then on, it was my own initiative that kept me working with the IETF—I had support from my group, and they usually had enough travel budget for me to attend the meetings.

Three years ago, I became chair of the RMCAT [RTP Media Congestion Avoidance Techniques] Working Group. I only gave that up when I became Transport Area Director (AD). I also was chair of the TCPINC Working Group for half a year. So I became an AD just six years after starting to participate in the IETF.

There are a limited number of people involved in the Transport Area. As soon as I became more active, I was encouraged to take the role of a Working Group chair. Transport AD wasn’t an option until I finished my PhD. Ultimately, though, it worked out nicely because I got stable funding for a project for a little more than two years, which freed me up to consider the position.

The project is generally funded by the European Union, with additional funding by Switzerland for my part, which includes work we planned to bring into the IETF. This would have allowed me to justify spending so much of my time on IETF work. However, since the project funding is coupled to certain research goals, I additionally contacted some companies and they provide support for some of my time and travel budget.

I hope that my experience as AD can count as management experience and that people value it. It’s a good way to improve your skills because you are in a management position where you don’t have any power, but you need to motivate people. For me, it is about how well I manage Working Groups and how well I manage my time. I spend 40% of my time on my AD work and 60% on my research project. It can be a challenge to balance them.

I don’t think that ETH directly benefits from me being Transport AD. But they did get external funding for our project, and that funding had a strong focus on making an impact on industry. So my standardization work may have helped to get the project funded. I don’t think I needed a leadership role for that. Being a Working Group chair was probably enough to show that I had IETF experience, but my AD role of course also makes a good impression.

Everybody’s biggest concern about taking on an IETF leadership role is time management. I do it on a 40% basis. It’s a little stressful, yes, but it is possible. The other reason it’s hard to find people for the Transport AD role is that the right person not only needs support, money, and time for the IETF, but also must have an overview about what’s going on in Transport. I was in the unique position that I was following the same Working Groups that I now carry as AD—it’s no extra effort.

I don’t have a plan yet for when my term is over, but I know I’d like to stay involved in the IETF. When my ETH project is finished, I’ll be a four-year post doc. I’ll need to make a decision about whether to stay in academics or go into industry. If I apply for a job next year, I won’t stand as Transport AD—I can’t ask a new employer to let me spend 40% of my time on the IETF. Even as a professor, it would be hard for me to get 40% of my time off for the IETF.

It’s been an interesting experience, particularly because I’m just starting my career. I’ve learned a lot, and I’ve made a lot of industry contacts that I’ve gotten to know well. I’m grateful—the IETF as a community has provided me with networking opportunities and a source of ideas for research.

 

IESG Retreat

Montreal skyline. Photo by Taxiarchos228 CC BY 3.0

The IESG held its annual retreat last week, meeting one day jointly with the IAB and two days on our own in Montreal, Canada. With several new members joining us as of the last IETF meeting, it was a good opportunity for everyone to spend more intensive time discussing hot topics and getting to know one another.

We focused a significant amount of our time together discussing the interaction between increased use of encryption, information available to observers on the network path, and existing operational practices. This has been a frequent topic of conversation in a variety of venues in the IETF as of late, including the MaRNEW workshop, numerous BoFs, charter and document discussions in the QUIC, TLS, OPSAWG, SAAG, and RTCWEB working groups, and on the IETF discussion list.

We examined the topic from a variety of angles. With the IAB we talked about the relative merits of signaling information explicitly versus implicitly, whether replacing implicit signals (about, say, path resources) with explicit signals could be viewed as an architecturally sound design approach, and what the real-world impacts of such a shift might be. We followed that up with discussion amongst IESG members about how to recognize proposals early on in the IETF process that could carry with them significant implications for current approaches to network manageability. As a next step we agreed amongst ourselves to flag such proposals for each other during our bi-weekly informal telechats to increase the likelihood of early cross-area review. Finally, we debated an approach being taken in the security community towards encryption of “all the things” — not things as in IoT, but things as in everything, including identity information, IP-level routing information, operations on data at rest, and a number of other “things” for which the robust application of encryption is still in nascent stages. The discussion teased out differences in perspective about the notion of which entities on the network might be perceived as trusted, or be perceived as attackers, under different network scenarios (e.g., enterprise versus consumer). I can’t say that we ended up with consensus on the topic as a whole, but we did garner greater appreciation of each other’s perspectives, and individual ADs are likely to funnel our conversation into broader community discussions.

IESG at work.

We also spent some time considering ideas to help spur further interaction between standards development in the IETF, development of running code, and open source efforts in the industry. In particular, we talked about ways to allow for working groups to iterate more quickly on YANG models, both from a tooling and a process perspective. We also had Charles Eckel and John Brzozowski join us remotely to brainstorm about future improvements to the IETF Hackathon and Bits-n-Bites events to support more opportunities for participants to collaborate on implementations and showcase works-in-progress. We don’t have concrete details to share on either of these fronts just yet, but we hope to have updates in the near future.

It wouldn’t have been an IESG retreat without some of our more typical housekeeping discussions. This year we touched on a number of IANA-related issues, discussed RFC sub-series, guidance concerning BoFs and side meetings, IETF communications, the future trajectory for remote participation, a suggestion to have more, shorter WG meeting slots, and a variety of other issues. All in all, the retreat was a good opportunity for IESG members to gain insights into how we’re each approaching challenges and opportunities big and small in the IETF, and how we can collaborate for the benefit of the IETF community.

Alissa Cooper
IETF Chair

Routing Area Update after IETF 98

Overhead photo of the Circle Interchange in Chicago. Photo by Stratosphere. (CC BY-SA 4.0)

As Routing Area Directors, we have now made it a habit to share some of our thoughts after each IETF meeting.  This is a short summary of some of the highlights from the recent one in Chicago.

YANG continues to be a focus in routing and the IETF as a whole.  While in Chicago, many working groups had YANG models in their agenda and even specific sessions to review and move the work forward, as was the case for the joint meeting of the mpls, ccamp, pce and teas WGs.  The importance of some of this work is reflected in a recent IETF Journal article: Working Group Update: Microwave Modelling at CCAMP.

The netmod WG is making progress with the Network Management Datastore Architecture (NMDA) that will allow access to system-derived state (think of new interfaces learned by inserting a line-card) and cleaner access to operational state versus configuration by having explicit operational and intended datastores.  Knowing this future direction allows us to recommend how affected YANG models can be finalized in RFCs.

Along with the Operations and Management (OPS) ADs, we will be sending out precise guidance (thanks to lots of work from the NetMod DataStore Design Team and the Routing YANG Architecture Design Team) that should allow most YANG models to be completed and implemented without delay.  To summarize, the models should include all configuration and state in a single normative module to be NMDA-ready.  When needed, it should have an optional state module that provides access to the state not otherwise accessible until NMDA and the associated protocol work is implemented.  Of course, there are rarely one-size-fits-all rules and this guidance is no different; the exceptions will need to be individually discussed.

During the Routing Area Open Meeting we were lucky to hear from the two Applied Networking Research Prize (ANRP) winners for IETF 98.  Both presentations focused on BGP, one on a framework to analyze live and historical data (BGPStream), and the other on accelerating the deployment of BGP origin and path validation.  Both talks resulted in interesting conversations with the audience and continued interaction throughout the meeting and beyond.

In the period before IETF 98, the spring WG accelerated the work on several of the key documents that will open the way for the segment routing-related extensions defined throughout the area, including the use cases and the Segment Routing Architecture document. The chairs opened the discussion about which topics should be the subject of an updated charter for this WG.

The sidr WG is also close to finishing its charter – just a couple of documents remain.  While it didn’t meet in Chicago, the participants met as part of the new sidrops WG (in the OPS area).  This transition from the development of solutions (origin and path validation in this case) to the understanding that implementation and deployment should now take center stage is critical to this work, but also to the routing area in general.   Another good example of the same trend is the trill WG, which is also finishing up the currently chartered work; additional work based on strong operational needs may still be considered.

It is very important for the WGs in the area to keep a close eye on operational requirements, experience and feedback.  Besides sidrops, other WGs also deal specifically with the operations of routing-related technology, including grow and mboned.

One of the topics for discussion in rtgwg was “Routing in the DC”, which has several proposals on how to make routing in the data center more efficient, scalable and flexible.  This session served as a review of some of the proposals and the opportunity to listen to operators express their needs in response.  We look forward to a continued and tighter participation from the operations community.

The detnet WG is making good progress on their Deterministic Networking use cases, architecture, and data plane selection.  The DetNet Data Plane Design Team gave an update on their work and proposed that their DetNet Data Plane solution document is ready for adoption.

The bier WG has made excellent progress with a unified encapsulation that will work for MPLS and Ethernet.  The WG took an important step forward in deciding to continue with the WGLC and eventual publication of their work as Experimental RFCs, instead of waiting for deployment experience and pursuing the Standards Track.  The decision was aided by the existing ability to do RFC Status Changes without impact to the IANA registries or even the RFC number.

The babel WG had a good discussion on adding unicast hellos to the protocol; several implementers have use-cases that need them.  The WG remains small but active with focused technical discussions and an emphasis on experimenting with ideas via implementations before agreeing on changes.

The nvo3 WG discussed the recommendation of its Design Team to select an updated Geneve as the standards-track encapsulation.  The consensus is being verified on the mailing list.  For the second IETF meeting in a row, nvo3 experimented with meeting twice and having small group discussions on data-plane, control-plane, and security.  The WG could use more folks interested in discussing security extensions and use-cases.

While we would like to walk through every WG in the area, this summary is meant to be just that, a summary.  If you want more information, please take a look at the IETF 98 Routing Area Working Group High-Level Summary that the WG Chairs put together.  Just a few more work items that are worth listing here:

  • The lisp wg is making progress with the transition of their core experimental documents, RFC 6830 and RFC 6833, to the Standards Track.
  • The teas wg has been refining the ACTN (Abstraction and Control of TE Networks) Framework and Requirements.
  • The Stateful PCE work continues in the pce WG with the imminent publication of the PCEP Extensions for Stateful PCE.

IPv6 is not optional – the need to specify, implement and deploy IPv6 is well understood by everyone.  During the Routing Area meeting we made a call for the WGs to consider the deployment of IPv6-only networks, and the transition to them from IPv4-only and dual-stack implementations.  The intent is to consciously find and fill any specification gaps as vendors and operators clearly work towards that goal.

As Area Directors, our interests go beyond the technical work and into, for example, the topics of outreach and remote participation.  To that effect, we are working on efforts to better characterize and eventually measure the participation at remote hubs, from people new to the IETF all the way to established contributors.  Also, we are currently involved in other IESG-wide topics including side meetings, BoFs, and the application of the Note Well.  We welcome any comments or ideas about these topics or others that you think the IESG should be addressing.

Finally, the IETF meeting in Chicago marked the start of Deborah and Alvaro’s second term as Area Directors.  We are very happy to be able to continue to work together.

Alia, Alvaro, Deborah (Routing Area Directors)

Looking Ahead, Facing Change

About a month ago I officially took on the role of IETF Chair. My predecessor Jari Arkko noted upon beginning his term as chair just how much can change from one chair’s term to the next. As I’ve started settling into my new role over these last weeks, I’ve been thinking a lot about what has been changing and what has been staying the same in the IETF.

Past and present IETF Chairs with IETF Senior Meeting Planner Marcia Beaulieu. From left, Fred Baker, Jari Arkko, Alissa Cooper, Marcia Beaulieu, Russ Housley, and Harald Alvestrand.

When I first started participating in the IETF, it didn’t take long for me to realize the importance of the IETF as a venue for creating the building blocks of the internet. The significance of the IETF derives from the combination of what we choose to work on and how we carry out that work. Producing core standardized protocols wouldn’t have nearly the same impact on the internet as the existing body of IETF work if it were done behind closed doors, if a single constituency could dictate the outcome, or if broad interoperability were not the main objective. To my eye, the core principles of the IETF process – open participation, cross-area review, and consensus – contribute to the success of IETF protocols in tandem with the design choices and technical trade-offs inherent in protocol design.

Of course, those process features are also often cited as drawbacks of IETF participation. “The IETF moves too slowly,” some people say. “They’re not adaptable,” “they can’t compete with open source,” “the biggest players aren’t interested in consensus.” Sound familiar? Sure, it’s true more often than not that if you’re trying to find agreement among a large, heterogeneous pool of people, that will require a different investment of work and time than deciding things among you and your close group of friends, or hacking something together all on your own. The challenge I see for the IETF in the coming years is to preserve the benefits of the essence of the IETF model while adapting to changes in the industry and the environment. With collaborative styles of engagement flourishing across both open source and standards development, there is a lot of opportunity for synergy.

How can we do a better job of integrating our work with open source development efforts? How can we evolve our tools and processes to align with how software is being developed and deployed today? How might we apply the model of cross-area review and consensus more broadly than to static text specifications? How can we evolve the administration of the IETF to give the community more flexibility and room to experiment? I have my own thoughts about these questions, but far more important are the ideas and efforts of the IETF community.

Personally I think we have many reasons to be optimistic about tackling these questions, based on recent IETF standards development work as well as ongoing community conversations and activities. Over the last several years we’ve seen protocol development efforts deeply intertwined with and informed by running code, with the concurrent development of 10 or more independent implementations, for instance in the case of HTTP/2 and TLS 1.3. We’ve seen broad interest across the industry in the kind of security expertise that has become a hallmark of the IETF, and resulting security and privacy improvements being developed for web, email, DNS, DHCP, real-time, and other kinds of traffic. We’ve seen tremendous energy behind the specification of YANG data models and their integration across the industry into standards processes. And community discussion and activity continues to grow around the IETF Hackathons, use of Github, remote participation, and IASA 2.0.

I’m excited to work with the community on how we face the changes around us while retaining the core of what makes the IETF most effective. We have lots of existing venues for discussions of specific aspects of this, but of course you can always send me your thoughts or post them to the IETF discussion list.

YANG Catalog Latest Development (IETF 98 Hackathon)

IETF 98 is now over. This was a successful IETF meeting in multiple ways, one of which was the IETF Hackathon: two days of hacking on Saturday and Sunday.

Before delving into the hackathon results, let’s briefly review the YANG « state of the union ». As you know, YANG has become the standard data modeling language of choice. Not only is it used by the IETF for specifying models, but it is also used in many Standards Development Organizations (SDOs), consortia, and open-source projects: the IEEE, the Broadband Forum (BBF), DMTF, MEF, ITU, OpenDaylight, Open ROADM, Openconfig, sysrepo, and more. Here is a nice summary presentation, “SDN and Metrics from SDOs”, by Dave Ward from the Open Networking Summit 2017.

This data model-driven management applies throughout the automation « stack »: from data plane programmability with the fd.io honeycomb open-source project, which exposes YANG models for VPP functionality via NETCONF and RESTCONF; to operating systems with the sysrepo open-source project (sysrepo is a YANG-based configuration and operational datastore for Unix/Linux applications); to network control via the networking specifications in IETF/IEEE/BBF/openconfig/etc.; to service model specifications (the YANG Data Model for L3VPN Service Delivery, as an example); to controllers, orchestrators, and open-source projects such as openconfig; to cloud and virtualization management; without forgetting the orchestration and policy aspects (for example, the MEF Lifecycle Service Orchestration).

With the rise of data model-driven management and the success of YANG as a key piece comes a challenge: the entire industry develops YANG models, but those models must work together in order for operators to automate coherent services. And they must work together NOW. We don’t have the luxury of working in well-planned sequences until all the modules are done: on one side, the YANG modules are constantly improved, and on the other side, they depend on each other.

In order to resolve this challenge, we’ve been working during IETF hackathons to provide the right open-source tools for the industry. Previous results after IETF 97 have been highlighted. At this IETF, a team of around 10 people went one step further by integrating tools around a YANG catalog.

Note: I took the easy way in writing this blog with « we », as opposed to stressing individual achievements, but many thanks to Joe Clarke, William Lupton, Einar Nilsen-Nygaard, Gary Wu, Mahesh Jethanandani, Radek Kreji, Sudhir Rustogi, Abhi Ramesh Keshav, Carl Moberg, Rob Wilton, Miroslav Kovac, Vladimir Vassilev, and more.

What is the idea behind a YANG catalog?

From a high-level point of view, the goal of this YANG catalog is to become a reference for all YANG modules available in the industry, both for YANG developers (to search what exists already) and for operators (to discover the more mature YANG models to automate services). This YANG catalog should not only contain pointers to the YANG modules themselves, but also metadata related to those YANG modules: What is the module type (service model or not)? What is the maturity level (for the IETF: is this an RFC, a working group document, or an individual draft)? Is this module implemented? Who is the contact? Is there open-source code available? And we expect much more in the future. The industry is starting to understand that the metadata related to those YANG modules is becoming as important as the YANG modules themselves. We based our work on the openconfig catalog as a starting point, but we realized that we have slightly different goals.

The added value of the YANG catalog, compared to a normal Github repository, resides in the toolchain and the additional metadata:

  • the ability to validate YANG modules (including IETF drafts) with multiple validators.
  • the related metadata regarding implementation
  • the ability to visualize the dependencies between YANG modules, including the bottlenecks in the standardization process
  • the search capabilities on any YANG type and metadata, based on http://yangcatalog.org/yang-search/yang-search.php, avoiding in this way the redefinition of models or model parts, which is costly to integrate
  • the REST APIs to query and post any content
  • the demonstration of data model-driven management with open source tools:
    YANG Explorer (a GUI-based tool to explore modules, generate some code, and connect the devices)
    YANG Development Kit (a more advanced tool for code generation)

Using this one-stop set of tools, the typical flow for a YANG module designer is to validate the YANG module and to populate the YANG catalog (via an IETF draft, via Github, or directly via the YANG catalog).

And the typical flow for a YANG module user is to search for an existing YANG module, to look up the metadata (such as maturity level, implementation, etc.), and to look up the import and include dependencies if any. Once the YANG module of choice is found, the YANG module user would browse the YANG module content, then load the YANG module in the YANG Explorer and test it by connecting to a NETCONF or RESTCONF server, and finally generate python scripts to start the automation.
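
As a sketch of what that user flow could look like against the catalog’s REST APIs, the short Python fragment below queries a hypothetical search endpoint and prints a few metadata fields. The endpoint path and JSON keys are assumptions for illustration; check the actual yangcatalog.org API before relying on them.

```python
# A minimal sketch of searching the YANG catalog over its REST API.
# The endpoint path and JSON keys are assumptions for illustration only;
# the real yangcatalog.org API may differ.
import requests

CATALOG_API = "https://yangcatalog.org/api"   # assumed base URL


def find_module(name: str) -> None:
    """Look up candidate modules by name and print the metadata a user cares about."""
    resp = requests.get(f"{CATALOG_API}/search/name/{name}", timeout=10)
    resp.raise_for_status()
    modules = resp.json().get("yang-catalog:modules", {}).get("module", [])
    for mod in modules:
        print(mod.get("name"), mod.get("revision"),
              mod.get("organization"), mod.get("maturity-level"))


if __name__ == "__main__":
    find_module("ietf-interfaces")
```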

This is obviously work in progress and contributions are welcome, as everything is open-source.

Practically, what did we do during this IETF 98 hackathon?

The YANG validator was improved, with yanglint as an additional validator and with SDO-specific plugins that check the correct format for IEEE, MEF, BBF, and Cisco modules (practically, they check the urn and prefix).
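
To illustrate the kind of check those plugins perform, here is a small stand-alone Python sketch that verifies a module’s namespace urn and prefix against per-organization patterns. The patterns are invented examples, not the actual plugin rules.

```python
# Stand-alone illustration of an SDO-specific "urn and prefix" check.
# The regular expressions are example conventions, not the real plugin rules.
import re
import sys

CONVENTIONS = {
    # organization: (namespace pattern, prefix pattern) -- illustrative only
    "ietf": (r"^urn:ietf:params:xml:ns:yang:", r"^[a-z][a-z0-9-]*$"),
    "bbf":  (r"^urn:bbf:yang:",                r"^bbf-[a-z0-9-]+$"),
}


def check_module(path: str, org: str) -> bool:
    """Return True if the module's namespace and prefix match the org's conventions."""
    text = open(path).read()
    ns = re.search(r'namespace\s+"([^"]+)"\s*;', text)
    prefix = re.search(r'prefix\s+"?([A-Za-z0-9_.-]+)"?\s*;', text)
    ns_pat, prefix_pat = CONVENTIONS[org]
    ns_ok = bool(ns and re.match(ns_pat, ns.group(1)))
    prefix_ok = bool(prefix and re.match(prefix_pat, prefix.group(1)))
    print(f"{path}: namespace {'ok' if ns_ok else 'BAD'}, prefix {'ok' if prefix_ok else 'BAD'}")
    return ns_ok and prefix_ok


if __name__ == "__main__":
    sys.exit(0 if check_module(sys.argv[1], sys.argv[2]) else 1)
```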

The YANG DB search laid a framework for the multi-SDO impact analysis, including a color scheme for the standard maturity levels.

Regarding YANG module validation, William Lupton from the Broadband Forum added all the BBF modules to the YANG catalog, including the work-in-progress ones (192 modules in total). While validating those modules, we discovered some issues with some validators. Those issues are now solved and most BBF YANG modules validate correctly. Finally, William updated the YANG catalog with the BBF modules’ maturity levels, which distinguish draft and ratified YANG modules. One example is the set of BBF work-in-progress YANG modules depending on the IETF interfaces YANG module [RFC 7223].

We created a script to push, as a cron job, all the YANG modules extracted from IETF drafts into Github (we still have to integrate it into the toolchain, btw). Background: it should be a priority for the IETF to separate the draft text from its « code » content, such as a YANG module, so that we don’t have to extract YANG modules via tooling, and so that the YANG module could progress in Github independently of the draft version.
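
For flavor, here is a rough Python sketch of the extraction step such a cron job performs, assuming the drafts use the standard <CODE BEGINS>/<CODE ENDS> markers; real extraction tooling handles many more corner cases (page breaks, missing markers, and so on).

```python
# Rough sketch of extracting YANG modules from IETF draft text using the
# <CODE BEGINS> file "name.yang" ... <CODE ENDS> markers. Real extraction
# tooling handles many more corner cases (indentation, page breaks, etc.).
import pathlib
import re
import sys

MODULE_RE = re.compile(
    r'<CODE BEGINS>\s+file\s+"([^"]+)"(.*?)<CODE ENDS>', re.DOTALL)


def extract_modules(draft_path: str, out_dir: str = "extracted") -> None:
    """Write every embedded YANG module in the draft to its own file."""
    text = pathlib.Path(draft_path).read_text()
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    for filename, body in MODULE_RE.findall(text):
        (out / filename).write_text(body.strip() + "\n")
        print("wrote", out / filename)


if __name__ == "__main__":
    extract_modules(sys.argv[1])
```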

Another major achievement for this hackathon is the integration of the YANG Development Kit into the YANG Explorer. In this example below, you see the YANG Explorer with the GUI on the left-hand side and the generated python script.

YANG Explorer with the generated python script.

In terms of YANG module transition, we also created a script to help with the latest IETF YANG module guidelines: a script to convert separate -config/-state trees into a combined tree, to generate an additional -state tree from a combined tree, and to generate an Openconfig-style tree from a combined tree. This will be pretty handy for the transition phase, and a good addition to the toolchain.

I personally have great hope for this YANG catalog, as it will solve a real issue in this industry. Now, we should not wait until the next hackathon to continue improving it. « Running code and consensus », I heard someone say!

And finally, here is the Packet Pushers podcast “YANG Models & Telemetry At IETF 98”, recorded at the very end of this IETF week. Always a pleasure to speak with Ethan Banks.

Regards, Benoit (Operations and Management Area Director)