These are the minutes of the CANMOD BOF session held from 09:00 to 11:30 on Wednesday, March 12th, 2008 in Philadelphia, Pennsylvania, at the 71st IETF meeting. They were assembled by Randy Presuhn from detailed notes graciously provided by Andy Bierman and Leif Johansson. As a consequence of the assembly process, there may be some duplication or mis-ordering of the "he said, she said" material. The most important part of these minutes is at the very end. The proposed agenda was adopted for the meeting.

The first presentation was from the "Kalua" team. The presentation was interrupted by the chair with a request to focus on that team's view of the requirements rather than on the technical details of their proposal. The stated focus was on the immediate needs for supporting NETCONF get-config, edit-config, and notifications (a brief illustrative sketch of a get-config request appears after this presentation summary). Kalua is an XML-based language. Bernd Linowski made the point that this helps make the language easy to learn and does not require new tools; it also avoids a technology gap between the protocol and its content. Eric Rescorla asked for clarification on whether this was XSD. The answer was that it was not: Kalua is a new language encoded in XML.

The class concept was explicitly stated as a requirement. In this context, the modeling of entity types with classes was described as providing a cohesive package of metadata that defines the data's structure. Eric Rescorla then asked what a class is, and Bernd Linowski explained. There was discussion of the need to relate instances of classes with relationships between classes. Leif Johansson asked whether Kalua had an explicit metamodel. Related issues were the support of inheritance for feature reuse and for specifying extended types. It was explained that Kalua supports only single inheritance, not multiple inheritance. Calculated relationships (those based on configuration data values) were introduced as a requirement; an example would be indexes that refer to objects in other tables. David Oran asked whether operations are part of the class and are subject to inheritance. Bernd Linowski said they should be, but that the details are still being worked out. Eric Rescorla asked how powerful the language for operations might be, and whether it would be Turing complete. The response was that there is only a need to identify operations and specify their semantics. The question was rephrased in terms of whether a definition consists of inputs and outputs with descriptions, or a complete program; the answer was that it is the former.

Releases were introduced as a requirement. Although this sounds like versioning, in response to questions it was described as a way to separate the version of the model from the versioning of the modeled object: versioning applies to data models, while releases apply to the network elements themselves. It was then asked whether these relationships can function across modules. The answer was that one can reference different releases in a module's imports, so specific versions can build up a release. Another requirement was that annotations should be type-safe, and it should be possible to restrict them to the correct type. Annotations could add additional data model elements. It is necessary to specify where annotations can be applied, and whether they are mandatory or optional.

David Partain asked whether the scope of the effort was limited to NETCONF or was intended to be IETF-wide. It was answered that NETCONF was the initial focus, but that broader applicability could be possible later.
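As an editorial illustration of the NETCONF operations cited above, the following minimal sketch builds a <get-config> request for the running datastore using only the Python standard library; the message-id value is arbitrary.

   import xml.etree.ElementTree as ET

   # NETCONF base namespace (RFC 4741).
   NC = "urn:ietf:params:xml:ns:netconf:base:1.0"
   ET.register_namespace("", NC)

   # Build <rpc message-id="101"><get-config><source><running/></source>
   # </get-config></rpc>; the message-id is chosen arbitrarily here.
   rpc = ET.Element("{%s}rpc" % NC, {"message-id": "101"})
   get_config = ET.SubElement(rpc, "{%s}get-config" % NC)
   source = ET.SubElement(get_config, "{%s}source" % NC)
   ET.SubElement(source, "{%s}running" % NC)

   print(ET.tostring(rpc, encoding="unicode"))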
Bert Wijnen returned to the question of why the language was XML-based, stating that it seemed to be requirements formulated in the form of a solution. Rohan Mahy asked which requirements from the Kalua effort are different from those arrived at by the RCDML design team, commenting that the Kalua material seemed more solution-oriented rather than a requirements statement. Eric Rescorla again asked what it is about XML that is so important, and what it means to be an XML language. As an example, RelaxNG compact syntax could map each element to a token; what then does "XML-based" really mean? (A brief illustrative sketch of this point follows this discussion.) This became a discussion about whether one looks at the problem from a metamodel or a syntactic perspective. Dave Perkins asked from the jabber room about version handling and negotiation. Interesting cases include an old manager talking to a new agent, maintaining backwards compatibility, and determining what release a particular bit of configuration data maps to.
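As an editorial illustration of the point about RelaxNG's two syntaxes, the sketch below (which assumes the third-party lxml package and an invented <host> element) validates a small instance against a RelaxNG pattern written in XML syntax; the equivalent compact-syntax form is shown in the comment.

   from lxml import etree

   # The same pattern in RelaxNG compact syntax would read:
   #   element host { element name { text }, element ip { text } }
   schema = etree.RelaxNG(etree.XML("""
   <element name="host" xmlns="http://relaxng.org/ns/structure/1.0">
     <element name="name"><text/></element>
     <element name="ip"><text/></element>
   </element>"""))

   instance = etree.XML("<host><name>dhcp-server-1</name><ip>192.0.2.1</ip></host>")
   print(schema.validate(instance))  # True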
The next major topic of the meeting was a review of highlights of the RCDML design team's requirements document. The chair asked the team contributors to highlight important requirements. Eric Rescorla commented that there seem to be two kinds of requirements: those that are obvious and agreed upon, and those that are non-trivial and not agreed upon. Randy Presuhn commented on this from the point of view of experience with similar work. Dan Romascanu commented that there are many agreed-upon requirements and many left out, with no indication of priority. Sharon Chisholm stated that without considering priority, it is easier to reject a requirement. Discussion followed on whether the requirements are stable. Randy Presuhn gave the example of the "minimize transition pain from SMI" requirement, which is not agreed but which in his opinion is very easy to satisfy (for all of the proposals). Part of the difficulty is the danger that some might interpret "minimizing transition pain" as a requirement for "compatibility", which comes at a potentially much higher cost. Discussion about the nature of complexity, machine readability, and human readability followed. Eric Rescorla asked how to make a decision if one cannot differentiate the proposals. David Partain responded that his group selected the important requirements to address as an overview, and that the design team members were by and large pretty close to each other. Randy Presuhn, the RCDML design team chair, confirmed this. David Partain urged a focus on readability. Dave Oran responded that this excludes UML. Randy Presuhn said there were several examples of requirements that had required lots of discussion but where the design team still reached agreement. Rohan Mahy went on to list some important requirements that were not agreed. Bernd Linowski said readability should be considered one part of usability, and that there are other considerations as well. Emile from France Telecom asked why SNMP/SMI is not on the table, saying that operators will not find a change easy to implement and that consideration of the transition from SMI is very important. Andy Bierman asked whether the requirements are gospel. Randy Presuhn, RCDML design team chair, responded with an emphatic "no." Andy Bierman asked further whether the community could override the design team by consensus. Randy Presuhn (both as design team chair and as BOF chair) responded with "of course", continuing that the design team never made any claims to be speaking for the community as a whole.

A question was posed about considerations for access control. Randy Presuhn responded that the design team did not reach agreement. David Partain characterized the agreed/not-agreed status used in the design team's draft; the design team did not apply rough consensus. Eric Rescorla said that it was good that "agreed" items had unanimous agreement among the design team members. Dan Romascanu (AD) asked which requirements had had rough consensus. The response was that the team had not tracked partial agreement at that level.

The discussion moved on to metamodels and what a metamodel is. The metamodel says what you think an object looks like, and describes how things like inheritance work. David Partain observed that in the design team there was a good dialogue between class-based and other approaches. Bert Wijnen asked about the extent to which re-use of existing standards should be a factor in making a decision. Sharon Chisholm responded that the design team was roughly 50/50 on the issue of re-using existing standard technology, and that metamodel issues were not the focus of the design team. Chris Newman (Applications Area Director) said there is a mess of multiple modeling languages, and that the Applications area is consequently concerned about the introduction of even more; the design of modeling languages is hard, and we should reuse languages when appropriate. He outlined his criteria for evaluating proposals at the IESG, noting that the semantics of the data modeling language should "align" with those of existing languages. Martin Storch commented that the metamodel stops at representing the data model, and should really just be an information model. Andy Bierman asked how working groups might extend existing models based on MIB modules. There was no clear answer; suggestions included using SMI instead. Follow-up questions asked about core infrastructure such as the interfaces table or the Entity MIB. Other questions raised included ones on entity relationships, and on determining the right mix of machine-readable semantics and description-clause semantics.

Eric Rescorla suggested that solution selection and differentiation will be based on not-agreed requirements, and asked how we could proceed without agreement on requirements. Dan Romascanu responded that this BOF was for the discussion of requirements, and that there would be another session for the discussion of solutions. David Partain noted that the design team had a lot of agreement, and that the members were not that far apart on most issues; model readability was the most important requirement in his estimation. This was followed by discussion of what was meant by readability. Randy Presuhn gave the example of coming to a common understanding of how defaults might work as an example of where the design team was able to reach agreement on a difficult issue. Rohan Mahy noted that for non-agreed requirements, there were differences in degree; he gave examples including referential integrity, models for defaults, backwards/forwards compatibility, and RPC operations. Bernd Linowski returned to the question of readability. In his understanding, "readability" means "usability", which in his estimation increases the emphasis on the use of existing tools.

Sharon Chisholm gave a presentation on the use of XSD to define NETCONF content. She illustrated how the requirement to define application errors could be addressed by appErrors.
Randy Presuhn asked whether status (obsolete, deprecated) was considered part of the model or part of the conformance material. Bert Wijnen asked how existing tools can handle the semantics of the new extensions, and whether tools need to be updated to support these new features; Sharon answered "yes". The discussion then dove into the question of what is gained by reusing standard tools if they need extensions before they can be productively used for NETCONF. The response was that even without extensions, some constraints can be checked without new tools. The presentation concluded with the XSD for the DHCP example.

Alex Clemm presented his team's take on the requirements, and their response. Their draft includes a new syntax (defined in XSD) to define relationships, objects, and the operations supported by managed resources. They consider the definition-reuse requirement to be very important, so they adopted a component- and inheritance-based approach. They made a distinction between reusable library definitions, which might be standardized, and implementation definitions, which might be specific to a particular version of a product. The resource-oriented new language is used to generate XSD.

Balazs Lengyel presented his team's work. He said they had not identified any new requirements, and saw no need for an additional requirements-gathering phase. They see ease of use as the top priority, first for operators, then for readers of specifications, and finally for tool makers, with a special emphasis on the need to understand the data model. He also supported the requirement to fully support the NETCONF protocol, including RPCs and notifications as well as data. Some issues included mapping high-level features to XML, and connecting a NETCONF RPC to its reply. Eric Rescorla asked what was meant by "RPC" in this context. Phil Shafer responded that it was effectively the signature for a method, specifying inputs and outputs. Another issue raised was how to extend models which had been designed without explicit consideration of extensibility, and without needing to modify the original specification; augmentations go into a different namespace. Referential integrity was touched on, allowing constraints like 'must' for relationships between nodes in an instance document, as well as keyrefs for relationships (illustrated in the sketch below). There was some discussion of the characterization of data (default, config, unique, mandatory), as well as of the use of formats optimized for representing semantics or syntax. Sharon Chisholm asked why requirements like "notification-get" were not addressed. The response was that these introduced too much complexity into the language.
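As an editorial illustration of the referential-integrity constraints mentioned above, a 'must'-style rule can be expressed as an XPath check over the instance document. The sketch below assumes the third-party lxml package and an invented <interfaces>/<acl> layout; it verifies that every interface referenced by an access rule is actually defined.

   from lxml import etree

   # Hypothetical configuration fragment: rules refer to interfaces by name.
   doc = etree.XML("""
   <config>
     <interfaces>
       <interface><name>eth0</name></interface>
       <interface><name>eth1</name></interface>
     </interfaces>
     <acl>
       <rule><interface>eth0</interface></rule>
       <rule><interface>eth2</interface></rule>  <!-- dangling reference -->
     </acl>
   </config>""")

   # 'must'-style constraint: every rule must reference a defined interface.
   ok = doc.xpath("not(//acl/rule/interface[not(. = //interfaces/interface/name)])")
   print(ok)  # False, because eth2 is not defined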
Martin Storch gave the presentation on the Kalua proposal. He used the DHCP example to show how it met the RCDML requirements. He showed the mechanism for imports, mentioning releases. Information in models is organized into classes and attributes, and relationships such as containment can be defined. Typed extensions are permitted. An interesting feature is the way a standard base type can be augmented with vendor data, creating a new module-specific version of the object. A vendor chipset extension to the Entity MIB was shown as an example: building on it, a class definition in a different module could declare entPhysicalEntry as the superclass. This supports the need to allow extension of anything without touching the original module. To illustrate relationships, an example was given for provisioning a service flow, showing the relationship between the subscriber and the MAC address. The ability to handle queries is built into the model. A calculated relationship is not the same as a keyref, and need not be intrusive to the model, since it does not necessarily require the referenced entity to be present. It was also noted that this approach supports extended hierarchical data, including the ability to represent things like nested directory structures.

Rohan Mahy gave a presentation on how the DSDL proposal would address the requirements. He explained that DSDL is a family of schema languages that includes RelaxNG for describing the data's structure and Schematron for rule-based validation. The motivations for using RelaxNG for syntax include making the language easy to read and to learn, and providing better XML support than XSD. Advantages of using Schematron for validation include the ability to divide the validation process into discrete phases and to represent important semantics, including keys, referential integrity, and ensuring that data is relevant to an operation. Extensibility is an important feature of DSDL; examples include the ability to combine RelaxNG patterns using "choice" or "interleave", as well as the ability to redefine an entire pattern. DSDL can meet nearly all of the RCDML design team's requirements, even ones like "deep keys". It is sufficiently powerful to support the different proposals for handling defaults, including defaults that are invariant as well as per-version defaults that potentially change with every version of a module. Goals of this work were the ability to construct data models from the bottom up, and to be able to extend models in ways not constrained by the original data model's scope.

The time that had been allocated to the presentation of the proposals' responses to the requirements was cut short because many of the initial presentations had exceeded their time limits. Consequently, the last proposal was skipped, and the meeting went directly to the "hums". The questions were:

- Are the requirements adequately understood? The sense of the room was that the requirements are adequately understood.
- Is there a need for this work? The sense of the room was that there is a need for this work.
- Is there sufficient agreement on the requirements to permit progress? The sense of the room was that there is.
- Should an IETF working group be formed? The sense of the room was that a working group to develop a Data Modeling Language suitable for NETCONF should be formed.
- Would additional time spent on requirements gathering and analysis be well-spent? The sense of the room was clearly "no" on this question.

Area Director Dan Romascanu wrapped up, agreeing with the characterization of the consensus. He added that this was a good discussion to have; there is a core set of requirements with consensus, and a few not-agreed requirements that need to be looked at further. In the next few weeks the process to form a WG will be worked out, and feedback from operators must be sought. Bert Wijnen responded that we already have agreement from operators, because the existing requirements came from operators and the new requirements are not really different. The chair thanked the participants, and the meeting was adjourned.