IETF 98 is now over. It was a successful IETF meeting in multiple ways, one of which was the IETF Hackathon: two days of hacking on Saturday and Sunday.
Before delving into the hackathon results, let's briefly review the YANG "state of the union". As you know, YANG has become the standard data modeling language of choice. Not only is it used by the IETF for specifying models, but also in many Standards Development Organizations (SDOs), consortia, and open-source projects: the IEEE, the Broadband Forum (BBF), DMTF, MEF, ITU, OpenDaylight, Open ROADM, OpenConfig, sysrepo, and more. Here is a nice summary presentation, "SDN and Metrics from SDOs", given by Dave Ward during the Open Networking Summit 2017.
This data model-driven management applies throughout the automation "stack": from data plane programmability with the fd.io honeycomb open-source project, which exposes YANG models for VPP functionality via NETCONF and RESTCONF; to operating systems with the sysrepo open-source project (sysrepo is a YANG-based configuration and operational datastore for Unix/Linux applications); to network control via the networking specifications from the IETF, IEEE, BBF, OpenConfig, and others; to service model specifications (the YANG Data Model for L3VPN Service Delivery, as an example); to controllers, orchestrators, and open-source projects such as OpenConfig; to cloud and virtualization management; without forgetting the orchestration and policy aspects (for example, the MEF Lifecycle Service Orchestration).
With the rise of data model-driven management and the success of YANG as a key piece comes a challenge: the entire industry develops YANG models, but those models must work together in order for operators to automate coherent services. And they must work together NOW. We don't have the luxury of working in well-planned sequences across all the modules: on one side, the YANG modules are constantly improved; on the other side, they depend on each other.
In order to resolve this challenge, we've been working during IETF hackathons to provide the right open-source tools for the industry. The results of the previous hackathon, at IETF 97, have been highlighted before. At this IETF, a team of around 10 people went one step further by integrating tools around a YANG catalog.
Note: I took the easy way out and wrote this blog with "we", as opposed to stressing individual achievements, but many thanks to Joe Clarke, William Lupton, Einar Nilsen-Nygaard, Gary Wu, Mahesh Jethanandani, Radek Krejci, Sudhir Rustogi, Abhi Ramesh Keshav, Carl Moberg, Rob Wilton, Miroslav Kovac, Vladimir Vassilev, and more.
What is the idea behind a YANG catalog?
From a high-level point of view, the goal of this YANG catalog is to become a reference for all YANG modules available in the industry, for both YANG developers (to search what already exists) and operators (to discover the most mature YANG models with which to automate services). This YANG catalog should not only contain pointers to the YANG modules themselves, but also metadata related to those YANG modules: What is the module type (service model or not)? What is the maturity level (for the IETF: is this an RFC, a working group document, or an individual draft)? Is this module implemented? Who is the contact? Is there open-source code available? And we expect much more in the future. The industry is starting to understand that the metadata related to those YANG modules is becoming equally important as the YANG modules themselves. We used the openconfig catalog as a starting point, but we realized that we have slightly different goals.
The added value of the YANG catalog, compared to a normal GitHub repository, resides in the toolchain and the additional metadata:
the demonstration of data model-driven management with open-source tools: YANG Explorer (a GUI-based tool to explore modules, generate some code, and connect to devices) and YANG Development Kit (a more advanced tool for code generation)
Using this one-stop set of tools, the typical flow for a YANG module designer is to validate the YANG module and to populate the YANG catalog (via an IETF draft, via GitHub, or directly via the YANG catalog).
And the typical flow for a YANG module user is to search for an existing YANG module, to look up the metadata (such as maturity level, implementation, etc.), and to look up the import and include dependencies, if any. Once the YANG module of choice is found, the YANG module user would browse the YANG module content, load the YANG module in the YANG Explorer and test it by connecting to a NETCONF or RESTCONF server, and finally generate Python scripts to start the automation.
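To give a flavor of the kind of automation this flow leads to, here is a minimal sketch, using only the Python standard library, of building a NETCONF <get-config> RPC with a subtree filter for the standard ietf-interfaces module. This is not the code the tools generate, just an illustration of the payload a generated script would send over an SSH/NETCONF session.

```python
import xml.etree.ElementTree as ET

# Standard namespaces: NETCONF base (RFC 6241) and ietf-interfaces (RFC 7223)
NC = "urn:ietf:params:xml:ns:netconf:base:1.0"
IF = "urn:ietf:params:xml:ns:yang:ietf-interfaces"

def build_get_config(message_id="101"):
    """Build a <get-config> RPC retrieving the ietf-interfaces subtree
    from the running datastore."""
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": message_id})
    get_config = ET.SubElement(rpc, f"{{{NC}}}get-config")
    source = ET.SubElement(get_config, f"{{{NC}}}source")
    ET.SubElement(source, f"{{{NC}}}running")
    flt = ET.SubElement(get_config, f"{{{NC}}}filter", {"type": "subtree"})
    ET.SubElement(flt, f"{{{IF}}}interfaces")  # subtree filter on /interfaces
    return ET.tostring(rpc, encoding="unicode")

print(build_get_config())
```

A real script would of course wrap this in a NETCONF session (for example with a client library) rather than just printing the XML.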
This is obviously work in progress and contributions are welcome, as everything is open-source.
Practically, what have we done during this IETF 98 hackathon?
The YANG validator improved, with yanglint as an additional validator and with IEEE-, MEF-, BBF-, and Cisco-specific plugins that check the correct format (practically, they check the URN and prefix).
The YANG DB search laid the groundwork for multi-SDO impact analysis, including a color scheme for the standard maturity levels. Below is an example.
Regarding YANG module validation, William Lupton from the Broadband Forum added all the BBF modules to the YANG catalog, including the work-in-progress ones (192 modules in total). While validating those modules, we discovered some issues with some validators. Those issues are now solved, and most BBF YANG modules validate correctly. Finally, William updated the YANG catalog with the BBF modules' maturity levels, which distinguish draft and ratified YANG modules. The example here shows BBF work-in-progress YANG modules depending on the IETF interfaces YANG module [RFC 7223].
We created a script to push, as a cron job, all YANG modules extracted from IETF drafts into GitHub (we still have to integrate it into the toolchain, by the way). Background: it should be a priority for the IETF to separate the draft text from its "code" content, such as a YANG module, so that we don't have to extract YANG modules via tooling, and so that the YANG module can progress in GitHub independently of the draft version.
Another major achievement of this hackathon is the integration of the YANG Development Kit into the YANG Explorer. In the example below, you see the YANG Explorer with the GUI on the left-hand side and the generated Python script.
In terms of YANG module transition, we also created a script to help with the latest IETF YANG module guidelines: it converts split -config/-state trees into a combined tree, generates an additional -state tree from a combined tree, and generates an OpenConfig-style tree from a combined tree. This will be pretty handy for the transition phase, and a good addition to the toolchain.
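To illustrate what the conversion does, consider a hypothetical module (`example-interfaces`, loosely modeled on RFC 7223; the module and node names are made up for this example) in the split -config/-state style:

```
module: example-interfaces
  +--rw interfaces
  |  +--rw interface* [name]
  |     +--rw name      string
  |     +--rw enabled?  boolean
  +--ro interfaces-state
     +--ro interface* [name]
        +--ro name          string
        +--ro oper-status   enumeration
```

The script would fold the two branches into a single combined tree, with config nodes marked `rw` and state nodes marked `ro`:

```
module: example-interfaces
  +--rw interfaces
     +--rw interface* [name]
        +--rw name           string
        +--rw enabled?       boolean
        +--ro oper-status    enumeration
```

The reverse conversions (combined tree back to an additional -state tree, or to an OpenConfig-style tree) follow the same structural mapping.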
I personally have great hopes for this YANG catalog, as it will solve a real issue in this industry. Now, we should not wait until the next hackathon to continue improving it. "Running code and consensus," I heard someone say!
Let me start with some good news. Not only did we recently approve RESTCONF (right now in the RFC Editor queue), but we also published the IPv4 and IPv6 base routing models in RFC 8022. We all learned a lot during the long process of specifying those routing YANG models and understand many things much better now. RFC 8022 is central to standardizing many other routing YANG models, as you can see from this picture from the brand-new visual dependency tool developed by Joe Clarke this weekend during the IETF Hackathon.
If you are excited to explore YANG model dependencies using this tool, give it a try with the YANG modules you're most impatient to see done.
So what are the hot topics for this IETF 97? We continue to add flexibility to support the finalization of modeling work.
First, the schema mount draft, which specifies a mechanism to combine YANG modules into the schema defined in other YANG modules, is an essential building block that should be standardized soon. Many YANG modules depend on this schema mount solution.
Second, the Revised Conceptual Model for YANG Datastores draft will receive a lot of attention during this week. It focuses on a revised conceptual model of datastores based on the experience gained with the current model and addresses requirements that were not well supported in the initial model. Basically, it introduces new datastores, for accessing additional views of the intended configuration, and a new ability to obtain the operational state.
Third, we are focusing on finishing up key YANG models, such as key chain, keystore, topologies, key routing ones (OSPF, IS-IS, PIM, BGP), access-list, logical network elements, etc. The routing base models follow the config and config-state branch conventions for specifying, respectively, the configuration and operational data. Models being submitted for publication should follow this same convention. We know that operators are moving to data model-driven management and waiting for standard models.
As mentioned during the last IETF meeting in Berlin, it's important to publish the IETF YANG models within a reasonable time frame if the IETF wants to play a key role in specifying YANG models, as opposed to only standardizing the protocols (NETCONF/RESTCONF and related push mechanisms) and related encodings (JSON, XML). As I mentioned in Berlin three months ago, we have at most a year to publish the majority of those IETF YANG models. It's time to focus and deliver.
More on the Hackathon outcomes after the IETF, but I can already tell that this Hackathon brought new tools and implementations. This is essential, as your automation is only as good as your toolchain.
After the IETF 97, I plan on updating this blog with the latest achievements.
This trend is also observed in different Standards Development Organizations such as the BBF (Broadband Forum), the Metro Ethernet Forum (MEF), and the Institute of Electrical and Electronics Engineers (IEEE), without forgetting the open-source projects OpenDaylight and OpenConfig.
In January, ETSI NFV organized an Information Modelling Workshop, which brought together participants from 3GPP, ATIS, Broadband Forum, DMTF, ETSI NFV, IETF, ITU-T SG15, MEF, OASIS/TOSCA, Open Cloud Connect, ONF, OpenDaylight, OPNFV and TM-Forum. The goal was to collaborate on the information model and data model in this SDN and NFV world. I participated as OPS AD, stressing the importance of data models and of YANG as THE data modeling language. Presentations can be downloaded here.
The YANG Model Coordination Group has been spending time on the inventory of those YANG models in the industry, the tooling aspects, the help with the compilation, the training & education (NETCONF, YANG, pyang), the coordination across SDOs/opensource, the model coordination with the IETF. On the tooling front, the YANG model extraction and compilation is now integrated in the IETF draft submission tool, thanks to Henrik Levkowetz. So no more excuses for producing YANG models that don’t compile.
The industry demands open YANG data models right now. Indeed, YANG data models are the basis for the data model-driven management that enables automation. So, with so many YANG data models in IETF drafts right now, why does it take so long to publish the final RFCs? Let me expand on two reasons.
The first reason is that the NETMOD and NETCONF community has been busy with some key deliverables lately.
– YANG 1.1: Based on the development and implementation experience with some of the YANG models, YANG version 1.1 is now being finalized. This new version is a maintenance release of the YANG language, addressing ambiguities and defects in the original specification.
– RESTCONF: an HTTP-based protocol that provides a programmatic interface for accessing data defined in YANG, using the datastores defined in NETCONF, along with two companion documents: the YANG Patch Media Type and the YANG Module Library.
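As a small aside on what "programmatic interface" means in practice, here is a minimal sketch of how a RESTCONF client builds a data resource URI following RFC 8040's encoding rules (list keys appended after `=`, percent-encoded). The hostname is hypothetical, and the sketch assumes the server's RESTCONF root is `/restconf` (in reality a client discovers it via `/.well-known/host-meta`).

```python
from urllib.parse import quote

def restconf_data_uri(host, path_segments):
    """Build a RESTCONF data resource URI (RFC 8040 style).
    path_segments is a list of (node, keys) tuples; list-key values are
    percent-encoded and comma-separated after '='."""
    parts = []
    for node, keys in path_segments:
        seg = node
        if keys:
            seg += "=" + ",".join(quote(str(k), safe="") for k in keys)
        parts.append(seg)
    return f"https://{host}/restconf/data/" + "/".join(parts)

uri = restconf_data_uri(
    "router.example.com",  # hypothetical device
    [("ietf-interfaces:interfaces", []), ("interface", ["eth0"])],
)
print(uri)
# https://router.example.com/restconf/data/ietf-interfaces:interfaces/interface=eth0
```

An actual request would then be an HTTP GET on that URI with `Accept: application/yang-data+json`.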
– YANG Metadata: The definition of a YANG extension statement that allows for defining metadata annotations in YANG modules.
– NETCONF Call Home and RESTCONF Call Home, which enable a NETCONF or RESTCONF server to initiate a secure connection to a NETCONF or RESTCONF client respectively.
Now that those major deliverables are in their final stages, the NETMOD and NETCONF WG resources will be free to tackle the next challenge.
The second reason is the coordination of all these models. While each model does a great job of defining how a particular feature can be configured or monitored, the models need to interact with each other. The end goal is to automate the creation of services (like the L3VPN service data model effort, which is almost complete), in a physical or virtual environment. If you consider the coordination of all the YANG data models within the IETF difficult, think twice, as the coordination is actually required for an entire industry. Before publishing the 200 YANG data models, we need to solve two important issues, which will influence the design of all standard data models:
How to structure all those models? As a practical example, how do we model the logical and virtual resource representations that may be present on a network device, such as Virtual Routing and Forwarding (VRF) instances and Virtual Switch Instances (VSIs)? Should all YANG data models contain a logical network element container, just in case a router supports a VRF or VSI? On that front, the NETMOD WG is currently working on a "mount" solution, a mechanism to combine YANG modules into the schema defined in other YANG modules. This mechanism would allow a simplification of the device model, particularly for "lower-end" devices that are unlikely to support multiple network instances or logical network elements.
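To make the "mount" idea concrete, here is an illustrative YANG fragment based on the schema mount work in progress: a hypothetical module (the module and node names are made up) declares a mount point under each logical device, and the per-device schema is attached there at run time instead of being hardwired into every model.

```yang
module example-logical-devices {
  yang-version 1.1;
  namespace "urn:example:logical-devices";
  prefix exld;

  import ietf-yang-schema-mount {
    prefix yangmnt;
  }

  container logical-devices {
    list logical-device {
      key "name";
      leaf name {
        type string;
      }
      container root {
        description
          "Root of the schema mounted for this logical device.";
        // The mounted modules are not named here; the server advertises
        // them at run time under this mount point.
        yangmnt:mount-point "logical-device-root";
      }
    }
  }
}
```

A low-end device simply would not implement this module, while a chassis supporting logical network elements would mount the full device schema under each `root` container.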
Once those two issues are resolved, this will for sure open the gate to publish all these much-needed models.
I’m not only a strong believer in data modeling driven management, but a strong believer in standard data models. The standard aspect, based on the consensus based approach, requires some time, but this is the price to pay for standard-based automation.
The first flow-related BoF (birds of a feather) took place in London in summer 2001 during the IETF meeting 51. A few months later, the IP Flow Information Export (IPFIX) working group (WG) was created, with the following goal in its charter: “This group will select a protocol by which IP flow information can be transferred in a timely fashion from an “exporter” to a collection station or stations and define an architecture which employs it. The protocol must run over an IETF approved congestion-aware transport protocol such as TCP or SCTP”. The charter planned for three deliverables: the requirements, the architecture, and the data model. At that time, I was told the intent was to standardize NetFlow, a proprietary implementation, which was already deployed in operator networks.
And so, it started.
The WG debated for a long time the requirements for the future IPFIX protocol selection, and hence the IPFIX architecture. There were five candidate protocols to select from, with different capabilities, and each candidate's proponents were obviously pushing their own protocol.
From there, the chairs decided that the WG should classify all requirements as "must", "should", "may", and "don't care". The "Requirements for IPFIX", RFC 3917, documented this outcome. An independent team, in charge of evaluating the different protocols in light of the documented requirements, concluded that the goals of the IPFIX WG charter would best be served by starting with NetFlow v9, documented in the meantime in the informational RFC 3954.
By that time, three years had passed.
The next couple of years were dedicated to the IPFIX protocol specifications. As I recall, the WG spent a year, maybe a year and a half, on transport-related discussions: should we use TCP or SCTP as the congestion-aware transport protocol? Meanwhile, most customers only cared about UDP, since their flow export collection is exclusively within their own management domain, and the distributed design of forwarding ASICs complicates congestion-aware transport requirements.
The final specifications compromise on: “SCTP [RFC4960] using the PR-SCTP extension specified in [RFC3758] MUST be implemented by all compliant implementations. UDP [UDP] MAY also be implemented by compliant implementations. TCP [TCP] MAY also be implemented by compliant implementations.”
The IPFIX protocol (RFC 5101) and the IPFIX information model (RFC 5102) were finally published in January 2008 as Proposed Standards. In the end, IPFIX is an improved NetFlow v9 protocol with extra features and requirements, such as transport, variable-length string encoding, security, and the template withdrawal message.
The IPFIX WG closed in 2015, with the following results:
– the IPFIX protocol and information model, respectively RFC 7011 and RFC 7012, published as Internet Standards
– a series of RFCs regarding IPFIX mediation functions (from a subsequent charter)
– almost 30 IPFIX RFCs in total (architecture, protocol extensions, implementation guidelines, applicability, MIB modules, YANG module, etc.)
– the IPFIX community worked on PSAMP (Packet SAMPling), which selected IPFIX to export packet sampling information, and produced four RFCs.
What are the lessons learned from those 13 years of standardization?
– 7 years (or 13 years, depending on whether you count to the Proposed Standards or to the IPFIX WG closure) of standardization is way too long, and is inadequate today, in the age of quick(er) open-source development. The IESG (Internet Engineering Steering Group) discussed this issue during a retreat, recognized it, and stressed that "WGs should have solution work from day 1", as explained by the IETF chair in his report at IETF 90. So basically, let's not spend precious WG time on use cases and requirements unless we have to.
– If the intent behind the charter is to standardize a specific solution (citing the OPS AD at that time, “the intent was to standardize NetFlow”), then the IESG and the charter should be clear about it. For example,”The WG should consider <draft-bla> as a starting point.”
– <Area Director hat on> Now that I'm part of the IESG, I can tell: don't fight the IESG regarding transport protocols. If the protocol will ever run over the Internet, it must run over a congestion-aware transport. Full stop. </Area Director hat on>
<Benoit hat on>In the end, the operators will do what they want anyway, and request what they need from equipment vendors.</Benoit hat on>
Maybe the only important question is: is IPFIX a success?
– From a specification point of view, yes. Granted, should we start from scratch today, we would change a few things, reflecting on years of experience.
– From an implementation point of view, yes.
– From a deployment point of view, not quite yet, but getting there. Indeed, it took so long to standardize IPFIX that the NetFlow v9 implementations improved over the years. The world will see more IPFIX deployments when operators require IPFIX-specific features … and there are not many SCTP IPFIX requests right now. 😉
In conclusion, we can say that IPFIX is a success, but the world has changed a lot since 2001, lessons have been learned, and today we approach standards in the IETF differently.
Regards, Benoit (Operations and Management Area Director)