For years, the IETF has been driving the industry transition from an overloaded Software Defined Networking (SDN) buzzword to data modeling-driven management. Working from a pragmatic definition of SDN, namely that “SDN functionally enables the network to be accessed by operators programmatically, allowing for automated management and orchestration techniques; application of configuration policy across multiple routers, switches, and servers; and the decoupling of the application that performs these operations from the network device’s operating system,” we’ve been executing on all of these requirements. Our IETF deliverables are: the YANG modeling language, protocols such as NETCONF and RESTCONF, encodings such as XML and JSON, and a set of YANG modules.
Let me start with a reflection. How do we know we’ve been fighting this uphill battle long enough and that it’s all downhill from here? Is it when we have published a core set of YANG models? When the technology is implemented by vendors? When the technology is deployed by operators? When other Standard Development Organizations/consortia/open-source projects embrace this technology? Probably most (or even all) of the above.
Now, an anecdote from this last IETF meeting. I was in a bar, connecting with IETF friends, when at some point the discussion centered on YANG. In the past I would have led the discussion, trying to convince and influence the crowd, but this time it was not necessary. I was quietly and happily sipping my beer while Alia, an IETF Routing Area Director, debated the importance of YANG. I enjoyed that moment so much, and I remember observing that it is a success when someone else does your job. When someone else makes your speech. Then you know you can safely pass the baton. Now, downhill does not mean that there are no more issues to resolve, so let’s review the state of affairs of the YANG models.
The Network Management Datastore Architecture (NMDA) Impact
We keep specifying YANG modules at the IETF. See the graphical evolution here and all the published YANG modules here. Why does it take so long, you may ask? Well, the world of standardization is never fast enough, as quality and consensus come at a price. However, in this particular case, the main reason we aren’t faster is that we’re busy finalizing the “Network Management Datastore Architecture” (NMDA), a new way to design YANG modules. To help grasp the concepts of this architecture, we can look at pieces of the draft:
Network management data objects can often take two different values, the value configured by the user or an application (configuration) and the value that the device is actually using (operational state). The original model of datastores required these data objects to be modeled twice in the YANG schema, as "config true" objects and as "config false" objects. The convention adopted by the interfaces data model ([RFC7223]) and the IP data model ([RFC7277]) was using two separate branches rooted at the root of the data tree, one branch for configuration data objects and one branch for operational state data objects.
The duplication of definitions and the ad-hoc separation of operational state data from configuration data leads to a number of problems. Having configuration and operational state data in separate branches in the data model is operationally complicated and impacts the readability of module definitions. Furthermore, the relationship between the branches is not machine readable and filter expressions operating on configuration and on related operational state are different.
With the revised architectural model of datastores defined in this document, the data objects are defined only once in the YANG schema but independent instantiations can appear in two different datastores, one for configured values and one for operational state values. This provides a more elegant and simpler solution to the problem.
To illustrate the NMDA principles, below is a comparison of the pyang tree for the RFC 7223 “Original Datastores Model” ietf-interfaces on the left and the new NMDA-compliant RFC7223bis draft ietf-interfaces on the right. With NMDA, the “intended” and “applied” values would be reported in different datastores and the link between those “intended” and “applied” values is now machine readable. This will lead to a “cleaner” tree definition. Indeed, the interfaces-state sub-tree disappears in the new YANG module.
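To see what this duplication looked like in practice, here is a deliberately simplified, hand-written YANG sketch (not the actual RFC 7223 or RFC7223bis text) contrasting the two approaches:

```yang
// Pre-NMDA style: every object is modeled twice, in two branches.
container interfaces {                // "config true" branch
  list interface {
    key "name";
    leaf name    { type string; }
    leaf enabled { type boolean; }
  }
}
container interfaces-state {          // "config false" duplicate
  config false;
  list interface {
    key "name";
    leaf name    { type string; }
    leaf enabled { type boolean; }
  }
}

// NMDA style: the objects are defined only once; the <running>
// datastore holds the configured (intended) values and the
// <operational> datastore holds the values actually in use.
container interfaces {
  list interface {
    key "name";
    leaf name    { type string; }
    leaf enabled { type boolean; }
  }
}
```

With NMDA, a client selects the datastore at retrieval time instead of navigating a parallel -state subtree, which is what makes the link between intended and applied values machine readable.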
An easy way to check if your YANG module is NMDA-compliant is to look it up in the YANG catalog metadata tool. For example, the report for the new ietf-interfaces YANG module is here: the following entry shows the NMDA compliance.
I like this definition of “Soon”, just for the fun of it! To be more serious: at one of the wrap-up meetings at IETF 100, the Routing Area Directors and I evaluated the status of most IETF YANG modules. Many are currently in last call and/or under YANG doctors review and will end up on the IESG plate soon. I’ll do my best to progress those before I step down as Area Director next March.
Tooling and YANG Module Metadata Become Increasingly Important
Producing YANG modules is one step in the right direction, but having the right tools and the right YANG module metadata (such as health metrics) is equally important. Instead of repeating myself, let me point you to the progress on that front in the recent blog post YANG Catalog Latest Developments (IETF 100 Hackathon).
Collaboration Across Standard Development Organizations
During the IETF 100, we had an IEEE/IESG breakfast meeting with a primary topic of discussion: YANG. We obviously discussed the NMDA compliance for the IEEE YANG modules. A single query to the yangcatalog produced the right output for our discussion: “return the latest version of all YANG modules where the organization is IEEE”.
From there, it’s easy to check whether each YANG module reports the tree-type metadata as “nmda-compatible”.
Next to IEEE, the Broadband Forum also inquired about the NMDA status, timeline, and possible transition in this liaison statement. With the NMDA architecture and a couple of key NMDA-compliant YANG modules close to completion, it’s time to reply to those liaisons and also to proactively warn the other SDOs, consortia, and open-source projects. To help with the NMDA transition, we have to understand which specific YANG modules those SDOs care about (basically, which YANG modules they will augment) and work on a transition together: should they move directly to the NMDA-compliant version, or still rely on the previous non-NMDA version?
YANG Module Update Procedure
YANG specifies strict rules [RFC7950] for updating YANG modules (when keeping the same YANG module name), imposing backward-compatible changes. However, this causes some issues in the world of automation! For example, the YANG paths must be changed in controllers and orchestrators when a YANG module with a new name is introduced. And it is not possible to know that one YANG module obsoletes or updates another without going through a level of indirection: the obsoletes/updates tags of the RFC documents themselves. The IETF has now faced the first occurrences of this issue, as OpenConfig did some time ago with its semver concept. There is more background information in this IETF draft, which was discussed in the NETMOD Working Group.
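To illustrate the semver idea, here is a minimal, hypothetical Python sketch of how a tool might derive the next version of a YANG module from the kind of change made. The rule names and the mapping are illustrative, not the draft’s exact algorithm:

```python
# Hypothetical sketch of semantic versioning for YANG module
# revisions, in the spirit of the semver concept discussed in the
# draft. The change categories and bump rules are illustrative.

def next_version(current: str, change: str) -> str:
    """Return the next version string for a module revision.

    change is one of: 'editorial', 'backward-compatible',
    'non-backward-compatible'.
    """
    major, minor, patch = (int(x) for x in current.split("."))
    if change == "non-backward-compatible":
        return f"{major + 1}.0.0"        # breaking change: major bump
    if change == "backward-compatible":
        return f"{major}.{minor + 1}.0"  # new nodes, etc.: minor bump
    return f"{major}.{minor}.{patch + 1}"  # editorial: patch bump

print(next_version("1.2.3", "backward-compatible"))      # 1.3.0
print(next_version("1.3.0", "non-backward-compatible"))  # 2.0.0
```

The point is that a backward-incompatible change becomes visible directly in the version number (a major bump), without consulting the obsoletes/updates metadata of the enclosing RFCs.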
A Lot of Telemetry Documents
Building on YANG, there was much work on YANG-based telemetry at IETF 100. These successes include an award at the IETF Hackathon, the progression of six adopted working group drafts in NETCONF, the addition of four new proposals from new authors, and NETCONF closing in on Working Group Last Call on three of these drafts. Based on the traction of the technology in the industry, it is also becoming relevant to working groups beyond NETCONF. Some progress includes:
The team from hackers.mu that participated remotely in the IETF Hackathon on 11-12 November 2017.
Hackers.mu is a developer group based in Mauritius made up of a wide range of people from different backgrounds: high school students, university students, professional engineers, and advisors to the minister of ICT. We participated remotely in the IETF Hackathons held in conjunction with IETF 98 and IETF 99 in the Automatic Multicast Tunneling (AMT) and Human Rights Protocol Considerations (HRPC) projects, respectively. After hearing about the recent changes happening in the TLS Working Group, we decided to work on TLS implementations for the IETF Hackathon held just before IETF 100. We packed our laptops and headed to Pereybere, in the north of the island.
Hackers.mu remotely participating in the IETF Hackathon
We stayed at a very comfortable location with proper A/C. We deployed our network, and connectivity was provided via a 3G mobile dongle. A big thank you to the TLS champions, who were very helpful and considerate on Instant Messaging. After we showed them our initial code, they directed us to a bunch of servers that we could use for testing. It was also very helpful to work alongside the people actually implementing the next iteration of the TLS draft. We were able to see how they were changing the implementation to work around problematic middleboxes. We learned a lot.
We had 8 people from Mauritius and 1 Mauritian from Denmark: Codarren Velvindron, Nitin Mutkawoa, Pirabarlen Cheenaramen (working from .dk), Nigel Yong, Sheik Meeran Ashmith Kifah, Muzaffar Auhammud, Yasir Auleear, and Yashvi Paupiah. We worked on the following open source software: wget, curl, monit, ftimes, aria2c, stunnel, nagios plugins, and hitch. Our project presentation slides are available here. A few of our members woke up every morning and went for a 30-minute swim before going back to the Hackathon room. The beach was less than 5 minutes from our Hackathon venue.
Overall, it was a fun IETF 100 Hackathon, and we really enjoyed how intensive it was. Along the way, we learned a lot about TLS internals, and the subtle details of the different implementations. Also, we would like to thank the hardworking people behind the remote participation infrastructure. They have done an amazing job! We were able to watch live from Mauritius the IETF Hackathon awards session. The TLS team won a prize for best remote participation!
We are looking forward to participating in the next IETF Hackathon scheduled for 17-18 March 2018, just before the IETF 101 meeting in London.
IETF 100 wrapped up just over a week ago in steamy Singapore. In addition to our usual productive working group sessions, hallway conversations, and ad hoc collaboration, we took the opportunity to mark the milestone of the 100th meeting with looks backward and forward in the IETF’s trajectory (plus some bubbles and sweets) at the plenary session.
We also got to share our appreciation for three individuals who have been working in support of the IETF for many years: Ray Pelletier, our recently retired IETF Administrative Director; Jorge Contreras, who will be stepping down as the IETF’s legal counsel at the end of this year; and Nevil Brownlee, whose term as the Independent Submission Editor will conclude in February. Many thanks and best wishes go out to them!
We started off the week with the IETF Hackathon, whose attendance continues to swell. Two hundred participants spent the weekend collaborating in teams on a wide variety of IETF-related implementation projects. As this third year of IETF Hackathons comes to a close, we have many teams viewing it as a requisite part of their IETF experience, including those working on YANG, DNS, I2NSF, TLS, and more. This time around we saw a number of teams with maturing implementations put more focus on interop, which was exciting to see.
It seemed like every time you turned a corner at IETF 100 you would run into someone talking about encryption, network operations, and the interaction between the two. TSVWG and OPSAWG both hosted generalized discussions of these issues. The QUIC working group continued its extended discussion of the implications of exposing cleartext bit(s) in the protocol. And the plenary session provided an opportunity for exchange of views between the community and area directors on this topic. Discussion has continued on the mailing lists since the meeting’s conclusion, and as passionately as participants feel about this, we can expect it to continue apace.
In other security news, we had two productive BoF sessions — SUIT and TEEP — focused on different aspects of securely provisioning and updating IoT and other devices. Both sessions were useful for clarifying the scope of the proposed work, as well as the relationship between the two. There appears to be substantial interest in taking on new work in both cases if the charter details and potential interactions with other existing work efforts can be sorted out.
The Routing Area also held a BOF (DCROUTING) to discuss characteristics and requirements of routing in a data center and to gauge interest from the community in engaging in new work in this area. There is significant interest to work on new protocols which will be purpose-built to address the data center. The Area Directors and the proponents will work on proposed charters in the coming weeks.
On the non-technical side of things, we made good progress in our community discussion about re-factoring the IETF’s administrative arrangements, also known as IASA 2.0. The work of the IASA 2.0 design team was well received and the ensuing discussion narrowed down the set of options for further consideration and development in the coming months. On the meeting’s last day the IETF leadership had an opportunity to share the background and status of these discussions with the ISOC Board of Trustees, who expressed their willingness to work together as this project moves forward.
Many thanks to our meeting host Cisco, not only for hosting the meeting, but also for sponsoring the hackathon, putting on an excellent social event in the sublime S.E.A. Aquarium, and making a long-term commitment to support the IETF as an IETF Global Host. Our meetings wouldn’t be the same without their support and that of all our sponsors!
The YANG team delivered again at the IETF 100 hackathon. With our goal to help YANG model users and designers, we developed new automation tools. As a reminder, we have been present since the very first hackathon at IETF 92. Even though many of us were not physically present in Singapore, we represented a virtual team of many members. This virtual team included those who have worked throughout the year on projects, including some working full time on tool development and maintenance. The team won the “Best Continuing Work” award during this IETF 100 hackathon, and it is well deserved. And no, I’m not THAT biased 🙂 .
Dave Ward stressed the importance of the YANG Catalog during his presentation at IETF 100 titled, “3 years on: Open Standards, Open Source, Open Loop.” (video here, slides here). His point (among others) was that the IETF should focus on the deployment of the product of the RFCs (so YANG modules in this case) as opposed to the RFC publication. Here are a couple of Dave’s relevant quotes: “Publishing an RFC SHOULD not be the metric for IETF success!”, “A technology is successful when it’s deployed.”, “Develop tooling & metadata at the same time as specification.”, “Create your dependency map and reach out to your IETF customers.”. The YANG catalog was taken as THE example in this train of thought.
At this hackathon, Joe Clarke and Miroslav Kovac demonstrated a new tool called YANG Suite, along with its integration with the YANG catalog set of tools and its integration with the YANG Development Kit (YDK).
You might remember the YANG Explorer tool, a useful tool demonstrated a few IETF hackathons ago. It has been suffering from a significant drawback: it is Flash-based. YANG Suite is the next-generation YANG Explorer, without this limitation.
YANG Suite automatically imports YANG modules (and dependent YANG modules) from the catalog. The YANG module trees are parsed and displayed. From there we can generate CRUD (Create, Read, Update, Delete) RPCs, by interacting with the GUI. YANG Suite also integrates with YDK, which facilitates network programmability using YANG data models. YDK can generate APIs in a variety of programming languages using YANG models. These APIs can then be used to simplify the implementation of applications for network automation. YDK has two main components: an API generator (YDK-gen) and a set of generated APIs. Today, YDK-gen takes YANG models as input and produces Python APIs (YDK-Py) that mirror the structure of the models.
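As a rough illustration of what “generating CRUD RPCs” boils down to, here is a small Python sketch that builds the XML payload of a NETCONF &lt;edit-config&gt; creating one entry in the standard ietf-interfaces interface list. YANG Suite and YDK generate far richer, model-driven APIs; this hand-rolled helper just shows the shape of the resulting XML:

```python
# Minimal, illustrative sketch: build the <config> payload of a
# NETCONF <edit-config> for ietf-interfaces with the standard library.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"
IFACES = "urn:ietf:params:xml:ns:yang:ietf-interfaces"

def make_create_interface(name: str, enabled: bool) -> str:
    """Return the <config> subtree that creates one interface entry."""
    config = ET.Element(f"{{{NC}}}config")
    interfaces = ET.SubElement(config, f"{{{IFACES}}}interfaces")
    interface = ET.SubElement(interfaces, f"{{{IFACES}}}interface")
    ET.SubElement(interface, f"{{{IFACES}}}name").text = name
    ET.SubElement(interface, f"{{{IFACES}}}enabled").text = (
        "true" if enabled else "false")
    return ET.tostring(config, encoding="unicode")

payload = make_create_interface("eth0", True)
print(payload)
```

A NETCONF client library such as ncclient would then send this payload inside an &lt;edit-config&gt; request against the running datastore.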
If you prefer a different set of Python tools, YANG Suite also offers the ncclient Python library for interacting with NETCONF servers. If you want to generate APIs in another language, you can develop a new YANG Suite package. As always, the tools are open source. This is a work in progress, and the documentation will follow soon.
With the goal in mind to create a full toolchain, we integrated YANG Suite directly into the YANG catalog, as shown in the previous figure. What does this mean? From the YANG catalog, you search for the relevant YANG module(s), evaluate their relevance with some health metrics (validation results, maturity, number of imports, etc.), check the related metadata, launch YANG Suite, and generate the Python script based on the values entered in the GUI, such as the YANG module content, CRUD operations, and datastore. The end user can now focus on automation as opposed to having to know the YANG language. For the end user, YANG is a means to an end, and should be hidden.
For the IETF draft writers, Henrik Levkowetz added links to the YANG catalog metadata and impact analysis directly in the datatracker. A real gain in productivity! The next step is to work on the synchronization of the data, with a direct update as soon as the draft is posted.
Dependencies and dependent YANG modules
Use case: Provide a comprehensive store for metadata from which to drive tools
Tree type: NMDA, transitional-extra, openconfig
Use case: Illustrate whether or not modules are NMDA-compliant
Add leafs for semantic versioning (semver): semantic_version and derived_semantic_version
Use case: Given a module, compare its semantic version over multiple revisions to understand what types of changes (e.g., backward-incompatible changes) have been made. Do the same given a vendor, platform, and software release for all modules
Below is an example of semantically different YANG module revisions.
Vladimir Vassilev worked on automated test cases based on raw NETCONF session scripting, running against ConfD and netconfd. In practice, he focused on the ietf-routing YANG modules in netconfd, in both the non-NMDA and the NMDA (in progress) versions. I believe associating some validation tests with YANG modules in the catalog would be an extremely useful addition. Recently, I received the feedback that working on very simple scenarios for service YANG modules is a complex task: working from examples is the way to go.
IETF 100 is just around the corner. It will offer all the usual opportunities for high-bandwidth exchange among IETF participants and collaboration around specs, coding and interop work. See the post below for some highlights. With the 100th meeting being viewed as a milestone by some, we’ll also be marking the occasion in a few small but special ways here and there throughout the week. Be sure to look out for those on the ground in Singapore.
We will once again be hosting the Hackathon on Saturday and Sunday. We’ll have a number of teams returning to carry forward their work from past hackathons, plus teams bringing new projects focusing on IPv6 transition technologies, JMAP, and more.
Folks are invited as always to join the Code Sprint on Saturday to work on tools for the IETF community. We’re always looking for more volunteers, so please join!
Sunday afternoon’s tutorial sessions will focus on two standardization efforts nearing completion in the IETF: TLS 1.3 and WebRTC. Come learn from the experts!
The two working-group-forming Birds of a Feather (BoF) sessions at this meeting will both be in the security area. Trusted Execution Environment Provisioning (TEEP) aims to standardize protocol(s) for provisioning applications into trusted execution environments (TEEs). Software Updates for Internet of Things (SUIT) is looking at firmware update solutions for Internet of Things (IoT) devices. Energy and interest in solutions to securely bootstrap constrained devices onto the network continues to grow.
We’ll have two working groups meeting for the first time, both in the Applications and Real-Time (ART) area. The DNS over HTTPS (DOH) working group is standardizing encodings for DNS queries and responses that are suitable for use in HTTPS, allowing the DNS to function in environments where problems are experienced with existing DNS transports. The Email mailstore and eXtensions To Revise or Amend (EXTRA) working group is dealing with updates and extensions to key email related protocols. Also meeting for the first time will be the proposed Decentralized Internet Infrastructure Research Group (DINRG), which is investigating open research issues in decentralizing infrastructure services such as trust management, identity management, name resolution, resource/asset ownership management, and resource discovery.
Folks looking for interesting area-wide discussions might want to check out the open area meetings in the transport and routing areas. The former will feature a discussion about current practices in coordinating specs and interop testing for QUIC and HTTP, while the latter will include an update from the routing area YANG architecture design team.
While for some the 100th meeting is an occasion to reflect on the IETF’s history, the technical plenary will be taking a look forward. The plenary will present a panel discussion featuring Monique Morrow, Jun Murai, and Henning Schulzrinne. They’ll be sharing their unique perspectives on what the Internet will look like in thirty years.
We’ll be running a new experiment at this meeting to give working group chairs the ability to organize sessions focused on running code. This will allow for groups to informally meet to brainstorm, code, and test ideas in the Code Lounge, a portion of the IETF lounge set aside for such activities. Working group chairs can sign up to reserve a time slot.
We wouldn’t be able to hold IETF meetings without the support of our sponsors. Big thanks to IETF 100 host Cisco! And to all of our sponsors for the meeting.
HTTPS (HTTP over TLS) is possibly the most widely used security protocol in existence. HTTPS is a two-party protocol; it involves a single client and a single server. This aspect of the protocol limits the ways in which it can be used.
The recently published RFC 8188 provides protocol designers a new option for building multi-party protocols with HTTPS by defining a standardized format for encrypting HTTP message bodies. While this tool is less capable than other encryption formats, like CMS (RFC 5652) or JOSE (RFC 7516), it is designed for simplicity and ease-of-integration with existing HTTP semantics.
The WebPush protocol (RFC 8030) provides an example of how the encrypted HTTP content coding could be used.
In WebPush, there are three parties: a user agent (in most cases this is a Web browser), an application server, and a push service. The push service is an HTTP server that has a special relationship with the user agent. The push service can wake a user agent from sleep and contact it even though it might be behind a firewall or NAT.
The application server uses the push service to send a push message to a user agent. The push service receives a message from the application server, and then forwards the contents of the push message to the user agent at the next opportunity. It is important here to recognize that the push service only forwards messages. It has no need to see or modify push messages. Both the user agent and the application server only communicate via the push service, but they both want some assurance that the push service cannot read or modify push messages. Nor do they want the push service to be able to create false push messages.
For example, an alerting service might use WebPush to deliver alerts to mobile devices without increased battery drain. Push message encryption ensures that these messages are trustworthy and allows the messages to contain confidential information.
The document draft-ietf-webpush-encryption, which was recently approved for publication as an RFC, describes how push messages can be encrypted using RFC 8188. The encrypted content coding ensures that the push service has access to the information it needs, such as URLs and HTTP header fields, but that the content of push messages is protected.
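To give a flavor of the mechanics, here is a standard-library Python sketch of the key and nonce derivation step of the aes128gcm content coding from RFC 8188. The AES-GCM record encryption itself is omitted, and the input values below are arbitrary test bytes:

```python
# Sketch of the RFC 8188 key/nonce derivation (derivation step only,
# no record encryption). Input values are arbitrary test bytes.
import hashlib
import hmac

def hkdf(salt: bytes, ikm: bytes, info: bytes, length: int) -> bytes:
    """Single-block HKDF-SHA-256 (extract, then one expand step),
    which suffices for the short outputs RFC 8188 needs."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

def derive_key_and_nonce(ikm: bytes, salt: bytes, seq: int):
    # Content-encryption key and base nonce, per RFC 8188.
    cek = hkdf(salt, ikm, b"Content-Encoding: aes128gcm\x00", 16)
    base_nonce = hkdf(salt, ikm, b"Content-Encoding: nonce\x00", 12)
    # The per-record nonce is the base nonce XORed with the record
    # sequence number, so every record encrypts differently.
    nonce = (int.from_bytes(base_nonce, "big") ^ seq).to_bytes(12, "big")
    return cek, nonce

cek, nonce = derive_key_and_nonce(b"\x01" * 16, b"\x02" * 16, 0)
print(len(cek), len(nonce))  # 16 12
```

The 16-byte salt travels in the clear in the header of the encoded body, so a receiver that holds the input keying material can re-derive the same key and nonce without any extra round trip.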
The IAB held a workshop on Explicit Internet Naming Systems last week in Vancouver, B.C., and there are a couple of interesting early conclusions to draw. The first conclusion is actually about the form of the workshop, which was an experiment by the IAB. While many of our workshops run like mini conferences, with paper presentations and follow-on questions, this workshop was structured as a retreat. There was a relatively small number of participants gathered around a common table space, with sessions organized as joint discussions around specific topics. Moderators kept the conversations on topic, and discussants kept it moving forward if it lagged.
The result was one of the most interactive workshops I’ve attended. While we did have to run a queue in most sessions (and the queues could get a bit long), the conversations had real give-and-take, more like an IETF hallway discussion than a series of mic line comments.
While I don’t expect that this style would be appropriate for all our workshops, it’s useful to know that this retreat style can work. I suspect we would use it again in other situations where the IAB is trying to step back from the current framing of an issue and synthesize a set of new approaches.
A second early conclusion is that the IAB was right in suspecting that its previous framing of the issues around Internet naming and internationalization wasn’t quite right. Among other things, that framing had us trying to push human interface considerations up the stack and away from the protocol mechanics that worked on what we saw as identifiers. One clear conclusion from this workshop was that the choice of identifier structure and protocol mechanics will constrain the set of possible human interfaces. When those constraints don’t match the needs of the human users, the resulting friction generates a lot of heat (and not much light). One suggestion for follow on work from the workshop will be to document the user interface considerations that arise from using different types of identifiers, so that new systems can recognize more easily the consequences of the identifier types they choose.
An additional point that came up multiple times was the role of implicit context in transforming references in speech or writing into identifiers that drive specific protocol mechanics. While the shorthand for this is “the side of the bus” problem, the space is much larger and includes heuristic search systems ranging from the educated guess through to highly personalized algorithmic responses. The participants saw a couple of possible ways in which standards developed in this area might advance how these tuples of context elements and references can be safely used to mint or manage identifiers. A first step in that will be to suggest that the IAB look at language tags, network provider identifiers, and similar common representations of context to see how they function across protocols. Follow on work from that might include developing common vocabularies, serialization formats, and analyzing privacy implications.
Like many others, I came away from the workshop with the realization that there is a dauntingly large amount of work to be done in this space. The workshop participants are drafting more than a half dozen follow-on recommendations for the IAB, as well as describing a potential research group and producing some individual drafts. Despite the amount of work facing us, I and many other participants left the room more hopeful than we came in, both that we can make progress and that some of the tools we need are already available.
If you’d like to join in the conversation, you can share your comments on Internet naming by email to firstname.lastname@example.org or directly with the IAB at email@example.com.
Before each IETF meeting, the Internet Engineering Steering Group (IESG) collects proposals for Birds of a Feather (BOF) sessions. These sessions are designed to help determine whether new working groups should be formed or to generate discussion about a topic within the IETF community. We decide which ones are ready for community discussion on the IETF meeting agenda, with input from the Internet Architecture Board (IAB). We did this last week in preparation for IETF 100 and I wanted to report the conclusions:
Software Updates for Internet of Things (SUIT) will be having a working-group-forming BOF session at IETF 100. The SUIT work is focused on developing a modern interoperable approach for securely updating the software in Internet of Things (IoT) devices. Security experts, researchers, and regulators recommend that all IoT devices be equipped with a secure firmware update mechanism, but current approaches are largely proprietary. The SUIT BOF will discuss an architecture for IoT firmware updates and a manifest format for describing meta-data about firmware images. The SUIT mailing list is here.
Trusted Execution Environment Provisioning (TEEP) will be reconvening for a second BOF after an initial session at IETF 98 and a tutorial at IETF 99. The goal of TEEP is to standardize protocol(s) for provisioning applications into secure areas now supported on some computer processors, known as Trusted Execution Environments (TEEs). TEEs are currently found in home routers, set-top boxes, smart phones, tablets and wearables. Most of these systems use proprietary application layer protocols. TEEP aims to produce an interoperable application-layer security protocol that enables the configuration of security credentials and software running in a TEE. The TEEP mailing list is here.
Data Center Routing (DCROUTING) will be having a non-working-group-forming BOF. Over the last year, there have been discussions in a number of routing area working groups about proposals aimed at routing within a data center. Because of their topologies (traditional and emerging), traffic patterns, need for fast restoration, and need for low human intervention, among other things, data centers are driving a set of routing solutions specific to them. The intent of this BOF is to discuss the special circumstances that surround routing in the data center and potential new solutions. The objective is not to select a single solution, but to determine whether there is interest and energy in the community to work on any of the proposals. The mailing list is here.
IETF Administrative Support Activity 2.0 (IASA 2.0) will be having a non-working-group-forming BOF to continue discussions that have been taking place over the last year regarding refactoring the IETF Administrative Support Activity (IASA). The IASA 2.0 design team has been incorporating feedback from IETF 99 and further refining and expanding their documentation of the problem, requirements, and solution options. The goal of this session will be to determine the sense of the community about the direction for IASA 2.0. The mailing list is here.
We also received a proposal for a WG-forming BOF concerning Common Operation and Management on Network Slicing (COMS), focused on standardizing an information model to support network slicing in 5G. While the scope of this work has narrowed considerably since IETF 99 based on feedback received there, the new proposal was not approved for this meeting cycle. Further work is needed. The Operations and Management (OPS) area directors and interested IAB members will continue working with the proponents prior to IETF 100. The Operations and Management Area Working Group (OPSAWG) may serve as a venue for related discussions if that work bears fruit.
Finally, we’ll have two newly chartered working groups meeting for the first time at IETF 100: Email mailstore and eXtensions To Revise or Amend (EXTRA) and DNS over HTTPS (DOH). EXTRA is chartered to work on updates, extensions, and revisions to the email-related protocols IMAP, Sieve, and ManageSieve. DOH will be standardizing encodings for DNS queries and responses that are suitable for use in HTTPS, enabling the domain name system to function over certain paths where existing DNS methods experience problems. The mailing lists are here: extra, doh. A third new working group, IDentity Enabled Networks (IDEAS), was proposed but not chartered due to a number of concerns expressed during IETF community review of the charter.
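To make the DOH work concrete: one encoding approach under discussion in the working group is to carry a standard DNS wire-format message in an HTTPS GET request, base64url-encoded in a query parameter. The sketch below is illustrative only and assumes that design; the `dns-query` path and `dns` parameter name are taken from the draft discussions and may change as the work progresses.

```python
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query in RFC 1035 wire format (qtype 1 = A record)."""
    # Header: ID=0, RD flag set, one question, no other records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # QCLASS 1 = IN

def doh_get_query(name: str) -> str:
    """Base64url-encode the message for a GET-style ?dns= parameter (padding stripped)."""
    msg = build_dns_query(name)
    return base64.urlsafe_b64encode(msg).rstrip(b"=").decode("ascii")
```

The encoded string would then be appended to the resolver's HTTPS URL, letting the DNS exchange traverse paths where plain UDP/TCP port 53 queries are blocked or tampered with.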
Together with the rest of the IETF’s ongoing work, it will be exciting to see all of the new efforts kick off in Singapore.
With the intention of encouraging the development of a solution to an issue currently under discussion within an IETF working group, I wanted to offer a personal view of possible ways forward. This view is informed by a number of conversations with people involved in the discussion from different perspectives. However, the following neither represents nor is intended to suggest an IETF position. The work to develop that remains to be done.
One of the strengths of the IETF process is that it brings together a diverse set of technical specialists—network operators, academics, developers, and protocol designers. The IETF seeks broad participation in its standards development processes because it leads to more robust standards—ones that work better and are more broadly applicable to the many different use cases found on the global Internet. However, it is not uncommon for implementers to learn about changes in protocols close to their publication date. This is an opportunity to encourage those using IETF standards to stay informed, at a minimum by reading the appropriate mailing lists.
In my opinion, unless voices are heard and solutions are found, the objective of end-to-end encryption to protect users’ privacy will be limited as a result of deployment challenges. I fully agree that end-to-end secure communication is necessary to protect the security of Internet sessions, end users’ privacy, and anonymity for human rights considerations. Unless we look at the obstacles and find ways to fix them, we won’t reach the goal of end-to-end security.
For the past several years, the IETF has been working to strengthen the Internet against pervasive monitoring. In line with that effort, the TLS Working Group has been developing Transport Layer Security (TLS) 1.3. TLS 1.3 is designed to improve security and protect information sent over the Internet from being intercepted and decrypted by unauthorized entities. While RFC 7525 had already recommended against the use of static RSA keys for TLS 1.2, this was formally deprecated in TLS 1.3 in favor of more secure key exchange based on the Elliptic Curve Diffie-Hellman algorithm. These are important steps toward protecting end users’ privacy and the security of TLS-protected sites, keeping pace with the evolving threat landscape.
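The forward secrecy that TLS 1.3 mandates comes from both sides contributing a fresh key share for every session, so no single long-term key can retroactively decrypt captured traffic. A toy finite-field Diffie-Hellman sketch illustrates the mechanic; this is illustrative only, with an arbitrarily chosen prime modulus, whereas real TLS 1.3 uses standardized elliptic-curve or finite-field groups.

```python
import secrets

# Toy parameters for illustration only -- not a standardized DH group.
P = 2**255 - 19   # a convenient large prime
G = 2             # toy generator

def fresh_keypair():
    """Each handshake generates a brand-new (ephemeral) private key."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Client and server each contribute a fresh share per session...
client_priv, client_pub = fresh_keypair()
server_priv, server_pub = fresh_keypair()

# ...and both arrive at the same shared secret independently.
assert pow(server_pub, client_priv, P) == pow(client_pub, server_priv, P)
```

Because both private values are discarded after the handshake, a recording of the wire traffic plus any long-term credential is not enough to recover the session secret later.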
In the case of TLS 1.3, which is nearing completion, enterprise data center operators have recently proposed a deployment method that would enable intercepting and decrypting traffic within a data center through the use of a static Diffie-Hellman key. This proposal would continue a method of intercepting and monitoring traffic similar to what is in place for all previous versions of TLS and SSL with static RSA keys. The intention is to restrict this method to use within a data center only, and not for connections to the Internet.
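To see why a static key enables this kind of monitoring, consider the same toy finite-field Diffie-Hellman arithmetic (illustrative only; the parameters below are arbitrary, not a standardized group): if the server's share is fixed and its private value is handed to a monitoring device, that device can recompute every session secret from passive captures alone.

```python
import secrets

# Toy parameters for illustration only -- not a standardized DH group.
P = 2**255 - 19
G = 2

# The server's key is static: provisioned once and shared with the monitor.
server_static_priv = secrets.randbelow(P - 2) + 1
server_static_pub = pow(G, server_static_priv, P)

# A client still picks a fresh key for its session...
client_priv = secrets.randbelow(P - 2) + 1
client_pub = pow(G, client_priv, P)
session_secret = pow(server_static_pub, client_priv, P)

# ...but anyone holding the static private key plus a capture of the
# client's public share recomputes the secret offline. This is exactly
# the property the proposal relies on -- and the reason it forfeits
# forward secrecy if the static key ever leaves the controlled environment.
monitor_secret = pow(client_pub, server_static_priv, P)
assert monitor_secret == session_secret
```

This also makes concrete the objection discussed below: nothing in the mathematics confines the technique to a data center.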
Some data center operators need the ability to look at network traffic for transaction monitoring because of regulations. At the same time, they want to adhere to best practices by encrypting network traffic and thereby protect against malicious interception. While the proposed scheme for TLS would address this need for monitoring and aligns with existing techniques used in some data centers, it should be noted that this is not the only possible technical approach that could enable both encrypted traffic within data centers and regulatory-required monitoring.
Within this use case, client TLS sessions over the Internet are not intended to use this proposal; they would maintain forward secrecy (likely without the 0-RTT option) and terminate at the edge of the data center. The proposal is intended for use within the data center, where both ends of the encrypted sessions are managed by the enterprise.
Others within the TLS working group have voiced concerns about the proposed approach, as there is no technical way to limit its use to the data center. In other words, the method could be used for wiretapping purposes by a third party if it were deployed for a connection over the Internet. The TLS working group agreed to continue discussing the issue, but how to solve the problem is uncertain at the moment.
The proposal is not likely to be further considered. With this in mind, a discussion on alternate approaches to meet the use case requirements could be quite useful. It may be possible to adapt existing work in the IETF, not necessarily TLS, to meet the requirement, should the IETF choose to work on this problem.
If TLS 1.3 sessions were terminated at the network edge, another solution could be used within the data center. One possibility is the use of IPsec. While not all data centers have the same needs, some are working toward the use of protocols such as IPsec or tcpcrypt, operating at the IP and TCP layers rather than the application layer.
Could IPsec be adapted to meet the requirements? IPsec transport mode isn’t widely deployed and has some interoperability issues, but this may present an opportunity to work toward a solution outside of TLS. There are also multiple group keying solutions already defined for IPsec, including Group Domain Of Interpretation (GDOI) [RFC6407] and Multimedia Internet KEYing (MIKEY) [RFC3830]. I haven’t yet seen a proposal for a protocol designed to be fit for purpose, but that could be of interest too. My impression from the set of requirements is that we do have time to work collaboratively toward a solution.
It may be of interest to know that the I2NSF working group, in cooperation with the IPsecME working group, is reviewing a proposal from data center operators to automate deployment of IPsec tunnels within a data center; a very productive interim call took place on 6 September specific to this proposal.
A third possibility is a multi-party session transport encryption protocol designed specifically for the data center monitoring use case; to my knowledge, none has been proposed to date.
A longer term solution is to determine what is missing from application logging and endpoint resources to maintain end-to-end encryption and eliminate monitoring via interception. There are scaling issues with pushing these functions out to the endpoint, so perhaps there are other methods that could be used to enable monitoring functions without exposing potentially sensitive information that may or may not be privacy related. This will take lots of work and I encourage application protocol developers to think toward solutions.
While the exact way forward is yet to be determined, I believe that by working collaboratively we can strengthen the Internet and accommodate a broader set of use cases to make IETF standards more relevant to the implementers and operators who put them into practice.
I personally think that we should be doing something to solve the data center use case and would like to see a solution separate from one that uses TLS. In doing so, the likelihood of TLS 1.3 being deployed as intended to protect users’ privacy increases, as does the security of the terminating site, and data center operators would gain a solution fit for purpose within their closed environments. It should be noted that nothing prevents the implementation strategy described in the proposal from being deployed; an RFC isn’t necessary for that to happen.
After reading the lengthy email discussions, the TLS 1.3 draft, and the proposal, and listening to the presentations at the WG session, it appears that there is motivation and expertise to address use cases not anticipated by TLS 1.3. Some are working toward this with the goal of draft adoption within the TLS working group for an extension-based solution in which the client is aware of interception. But adoption is not guaranteed.
If this issue affects you, and you believe you can contribute, I encourage you to read through the WG mailing list archive and propose ways forward, whether by adapting a solution within TLS, via alternate encryption and decryption solutions for use within the data center, or through improvements to applications that eliminate the desire to intercept traffic. The use case presented is important to data center operators, and working with those deploying IETF protocols increases the likelihood of those protocols being deployed as intended. The IETF doesn’t need to take action, but I’d like to encourage those with ideas to advance the thinking in this problem space.
IETF 99 is about to kick off in Prague, Czech Republic. There is lots of exciting work going on across more than 100 working groups, plus Birds-of-a-Feather (BoF) sessions, plenary talks, and other meetings. Here are a few sessions to keep an eye out for:
We have close to 200 participants signed up for the Hackathon taking place Saturday and Sunday. Around two dozen teams will be collaborating on code projects spanning the breadth of IETF protocols, from security to DNS to transports to IoT and more.
Folks are invited as always to join the Code Sprint on Saturday to work on tools for the IETF community — please join!
Sunday afternoon’s tutorial sessions will include two new technical tutorials. The TEEP tutorial will explain Trusted Execution Environments (TEE) and their associated protocol needs. The IEEE 802.1 Time-Sensitive Networking (TSN) tutorial will explain the TSN group’s work on transport of data packets with bounded low latency, low delay variation, and zero congestion loss, closely related to the IETF’s Deterministic Networking working group.
While not an IETF event, the Applied Networking Research Workshop, put on by ACM, the IRTF, and ISOC, is taking place on Saturday. The workshop will provide a venue for discussing emerging results in applied networking research related to measurements, transport, implementation and operational issues, and internet health metrics. (Registration required.)
Those interested in 5G may want to attend the NETSLICING BoF, which is looking at isolation of resources and virtual network functions to support a variety of services. There will also be a plenary lunch time panel on Tuesday about 3GPP and IETF collaboration on 5G in Congress Hall III.
Other BoFs during the week: BANANA, focused on developing solution(s) to support dynamic path selection on a per-packet basis in networks that have more than one point of attachment to the Internet; IDEAS, which is aiming to standardize a framework that provides identity-based services that can be used by any identifier-location separation protocol; and IASA 2.0 where the community discussion about administrative re-arrangements for the IETF continues. Also in the realm of new work proposals, the IPPM working group will be discussing a charter update to allow the WG to take on work related to in-situ OAM.
We continue to see high interest in ongoing work related to data modeling, QUIC, and security. Catch the OPSAWG session for some discussion about managing the development and use of YANG models and the joint CCAMP/MPLS/PCE/TEAS session focused exclusively on YANG models, among other sessions. The QUIC WG will meet jointly with the HTTPBIS working group to discuss interaction between QUIC and HTTP, in addition to two meeting slots on its own. In the security area, both the TLS and ACME working groups are close to finalizing several core deliverables, and the SAAG meeting will feature a talk on post-quantum crypto.
We wouldn’t be able to hold IETF meetings without the support of our sponsors. Big thanks to IETF 99 hosts Comcast NBCUniversal and CZ.NIC! And to all of our sponsors for the meeting.
Wishing everyone a productive and enjoyable meeting!