idnits 2.17.1 draft-clemm-netmod-mount-04.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- ** The document seems to lack an IANA Considerations section. (See Section 2.2 of https://www.ietf.org/id-info/checklist for how to handle the case when there are no actions for IANA.) ** The document seems to lack a both a reference to RFC 2119 and the recommended RFC 2119 boilerplate, even if it appears to use RFC 2119 keywords. RFC 2119 keyword, line 637: '... A mountpoint MUST be contained unde...' RFC 2119 keyword, line 645: '...the mount target SHOULD refer to a con...' RFC 2119 keyword, line 651: '...tively leaf-list SHOULD be mounted ins...' RFC 2119 keyword, line 656: '... mountpoints MUST NOT introduce circ...' RFC 2119 keyword, line 657: '...ounted datastore MUST NOT contain a mo...' (20 more instances...) Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year == Line 1424 has weird spacing: '...ting-ne netw...' == The document seems to contain a disclaimer for pre-RFC5378 work, but was first submitted on or after 10 November 2008. The disclaimer is usually necessary only for documents that revise or obsolete older RFCs, and that take significant amounts of text from those RFCs. If you can contact all authors of the source material and they are willing to grant the BCP78 rights to the IETF Trust, you can and should remove the disclaimer. Otherwise, the disclaimer is needed and you can ignore this comment. (See the Legal Provisions document at https://trustee.ietf.org/license-info for more information.) -- The document date (March 21, 2016) is 2952 days in the past. Is this intentional? Checking references for intended status: Experimental ---------------------------------------------------------------------------- ** Obsolete normative reference: RFC 3768 (Obsoleted by RFC 5798) ** Obsolete normative reference: RFC 6536 (Obsoleted by RFC 8341) ** Obsolete normative reference: RFC 7223 (Obsoleted by RFC 8343) -- No information found for draft-I-D - is the name correct? == Outdated reference: A later version (-09) exists of draft-ietf-i2rs-pub-sub-requirements-05 -- No information found for draft-ietf-netconf-restonf - is the name correct? == Outdated reference: A later version (-25) exists of draft-ietf-netconf-yang-push-02 Summary: 5 errors (**), 0 flaws (~~), 5 warnings (==), 3 comments (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 Network Working Group A. Clemm 3 Internet-Draft J. Medved 4 Intended status: Experimental E. Voit 5 Expires: September 22, 2016 Cisco Systems 6 March 21, 2016 8 Mounting YANG-Defined Information from Remote Datastores 9 draft-clemm-netmod-mount-04.txt 11 Abstract 13 This document introduces capabilities that allow YANG datastores to 14 reference and incorporate information from remote datastores. This 15 is accomplished by extending YANG with the ability to define mount 16 points that reference data nodes in another YANG subtree, by 17 subsequently allowing those data nodes to be accessed by client 18 applications as if part of an alternative data hierarchy, and by 19 providing the necessary means to manage and administer those mount 20 points. Two flavors are defined: Alias-Mount allows to mount local 21 subtrees, while Peer-Mount allows subtrees to reside on and be 22 authoritatively owned by a remote server. YANG-Mount facilitates the 23 development of applications that need to access data that transcends 24 individual network devices while improving network-wide object 25 consistency, or that require an aliasing capability to be able to 26 create overlay structures for YANG data. 28 Status of This Memo 30 This Internet-Draft is submitted in full conformance with the 31 provisions of BCP 78 and BCP 79. 33 Internet-Drafts are working documents of the Internet Engineering 34 Task Force (IETF). Note that other groups may also distribute 35 working documents as Internet-Drafts. The list of current Internet- 36 Drafts is at http://datatracker.ietf.org/drafts/current/. 38 Internet-Drafts are draft documents valid for a maximum of six months 39 and may be updated, replaced, or obsoleted by other documents at any 40 time. It is inappropriate to use Internet-Drafts as reference 41 material or to cite them other than as "work in progress." 43 This Internet-Draft will expire on September 22, 2016. 45 Copyright Notice 47 Copyright (c) 2016 IETF Trust and the persons identified as the 48 document authors. All rights reserved. 50 This document is subject to BCP 78 and the IETF Trust's Legal 51 Provisions Relating to IETF Documents 52 (http://trustee.ietf.org/license-info) in effect on the date of 53 publication of this document. Please review these documents 54 carefully, as they describe your rights and restrictions with respect 55 to this document. Code Components extracted from this document must 56 include Simplified BSD License text as described in Section 4.e of 57 the Trust Legal Provisions and are provided without warranty as 58 described in the Simplified BSD License. 60 This document may contain material from IETF Documents or IETF 61 Contributions published or made publicly available before November 62 10, 2008. The person(s) controlling the copyright in some of this 63 material may not have granted the IETF Trust the right to allow 64 modifications of such material outside the IETF Standards Process. 65 Without obtaining an adequate license from the person(s) controlling 66 the copyright in such materials, this document may not be modified 67 outside the IETF Standards Process, and derivative works of it may 68 not be created outside the IETF Standards Process, except to format 69 it for publication as an RFC or to translate it into languages other 70 than English. 72 Table of Contents 74 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 75 1.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . 3 76 1.2. Examples . . . . . . . . . . . . . . . . . . . . . . . . 5 77 2. Definitions and Acronyms . . . . . . . . . . . . . . . . . . 7 78 3. Example scenarios . . . . . . . . . . . . . . . . . . . . . . 8 79 3.1. Network controller view . . . . . . . . . . . . . . . . . 8 80 3.2. Consistent network configuration . . . . . . . . . . . . 10 81 4. Operating on mounted data . . . . . . . . . . . . . . . . . . 11 82 4.1. General principles . . . . . . . . . . . . . . . . . . . 12 83 4.2. Data retrieval . . . . . . . . . . . . . . . . . . . . . 12 84 4.3. Other operations . . . . . . . . . . . . . . . . . . . . 13 85 4.4. Other considerations . . . . . . . . . . . . . . . . . . 13 86 5. Data model structure . . . . . . . . . . . . . . . . . . . . 14 87 5.1. YANG mountpoint extensions . . . . . . . . . . . . . . . 14 88 5.2. YANG structure diagrams . . . . . . . . . . . . . . . . . 15 89 5.3. Mountpoint management . . . . . . . . . . . . . . . . . . 15 90 5.4. Caching . . . . . . . . . . . . . . . . . . . . . . . . . 17 91 5.5. Other considerations . . . . . . . . . . . . . . . . . . 18 92 5.5.1. Authorization . . . . . . . . . . . . . . . . . . . . 18 93 5.5.2. Datastore qualification . . . . . . . . . . . . . . . 18 94 5.5.3. Mount cascades . . . . . . . . . . . . . . . . . . . 19 95 5.5.4. Implementation considerations . . . . . . . . . . . . 19 96 5.5.5. Modeling best practices . . . . . . . . . . . . . . . 20 97 6. Datastore mountpoint YANG module . . . . . . . . . . . . . . 20 98 7. Security Considerations . . . . . . . . . . . . . . . . . . . 28 99 8. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 29 100 9. References . . . . . . . . . . . . . . . . . . . . . . . . . 29 101 9.1. Normative References . . . . . . . . . . . . . . . . . . 29 102 9.2. Informative References . . . . . . . . . . . . . . . . . 30 103 Appendix A. Example . . . . . . . . . . . . . . . . . . . . . . 31 104 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 35 106 1. Introduction 108 1.1. Overview 110 This document introduces a new capability that allows YANG datastores 111 [RFC6020] to incorporate and reference information from other YANG 112 subtrees. The capability allows a client application to retrieve and 113 have visibility of that YANG data as part of an alternative 114 structure. This is provided by introducing a mountpoint concept. 115 This concept allows to declare a YANG data node in a primary 116 datastore to serve as a "mount point" under which a subtree with YANG 117 data can be mounted. This way, data nodes from another subtree can 118 be inserted into an alternative data hierarchy, arranged below local 119 data nodes. To the user, this provides visibility to data from other 120 subtrees, rendered in a way that makes it appear largely as if it 121 were an integral part of the datastore. This enables users to 122 retrieve local "native" as well as mounted data in integrated 123 fashion, using e.g. Netconf [RFC6241] or Restconf 124 [I-D.ietf-netconf-restconf] data retrieval primitives. The concept 125 is reminiscent of concepts in a Network File System that allows to 126 mount remote folders and make them appear as if they were contained 127 in the local file system of the user's machine. 129 Two variants of YANG-Mount are introduced, which build on one 130 another: 132 o Alias-Mount allows mountpoints to reference a local YANG subtree 133 residing on the same server. It provides effectively an aliasing 134 capability, allowing for an alternative hierarchy and path for the 135 same YANG data. 137 o Peer-Mount allows mountpoints to reference a remote YANG subtree, 138 residing on a different server. It can be thought of as an 139 extension to Alias-Mount, in which a remote server can be 140 specified. Peer-Mount allows a server to effectively provide a 141 federated datastore, including YANG data from across the network. 143 In each case, mounted data is authoritatively owned by the server 144 that it is a part of. Validation of integrity constraints apply to 145 the authoritative copy; mounting merely provides a different view of 146 the same data. It does not impose additional constraints on that 147 same data; however, mounted data may be referred to from other data 148 nodes. The mountpoint concept applies in principle to operations 149 beyond data retrieval, i.e. to configuration, RPCs, and 150 notifications. However, support for such operations involves 151 additional considerations, for example if support for configuration 152 transactions and locking (which might now apply across the network) 153 were to be provided. While it is conceivable that additional 154 capabilities for operations on mounted information are introduced at 155 some point in time, their specification is beyond the scope of this 156 specification. 158 YANG does provide means by which modules that have been separately 159 defined can reference and augment one another. YANG also does 160 provide means to specify data nodes that reference other data nodes. 161 However, all the data is assumed to be instantiated as part of the 162 same datastore, for example a datastore provided through a NETCONF 163 server. Existing YANG mechanisms do not account for the possibility 164 that some information that needs to be referred not only resides in a 165 different subtree of the same datastore, or was defined in a separate 166 module that is also instantiated in the same datastore, but that is 167 genuinely part of a different datastore that is provided by a 168 different server. 170 The ability to mount information from local and remote datastores is 171 new and not covered by existing YANG mechanisms. Until now, 172 management information provided in a datastore has been intrinsically 173 tied to the same server and to a single data hierarchy. In contrast, 174 the capability introduced in this specification allows the server to 175 render alternative data hierarchies, and to represent information 176 from remote systems as if it were its own and contained in its own 177 local data hierarchy. 179 The capability of allowing the mounting of information from other 180 subtrees is accomplished by a set of YANG extensions that allow to 181 define such mount points. For this purpose, a new YANG module is 182 introduced. The module defines the YANG extensions, as well as a 183 data model that can be used to manage the mountpoints and mounting 184 process itself. Only the mounting module and its server (i.e. the 185 "receivers" or "consumers" of the mounted information) need to be 186 aware of the concepts introduced here. Mounting is transparent to 187 the "providers" of the mounted information and models that are being 188 mounted; any data nodes or subtrees within any YANG model can be 189 mounted. 191 Alias-Mount and Peer-Mount build on top of each other. It is 192 possible for a server to support Alias-Mount but not Peer-Mount. In 193 essence, Peer-Mount requires an additional parameter that is used to 194 refer to the target system. This parameter does not need to be 195 supported if only Alias-Mount is provided. 197 Finally, it should be mentioned that Alias-Mount and Peer-Mount are 198 not to be confused with Schema-Mount 199 [I-D.bjorklund-netmod-structural-mount]. Schema Mount allows to 200 instantiate an existing model definition underneath a mount point, 201 not reference a set of YANG data that has already been instantiated 202 somewhere else. In that sense, Schema-Mount resembles more a 203 "grouping" concept that allows to reuse an existing definition in a 204 new context, as opposed to referencing and incorporating existing 205 instance information into a new context. 207 1.2. Examples 209 The requirements for mounting YANG subtrees from remote datastores, 210 as long as a set of associated use cases, are documented in 211 [I-D.voit-netmod-yang-mount-requirements]. The ability to mount data 212 from remote datastores is useful to address various problems that 213 several categories of applications are faced with. 215 One category of applications that can leverage this capability are 216 network controller applications that need to present a consolidated 217 view of management information in datastores across a network. 218 Controller applications are faced with the problem that in order to 219 expose information, that information needs to be part of their own 220 datastore. Today, this requires support of a corresponding YANG data 221 module. In order to expose information that concerns other network 222 elements, that information has to be replicated into the controller's 223 own datastore in the form of data nodes that may mirror but are 224 clearly distinct from corresponding data nodes in the network 225 element's datastore. In addition, in many cases, a controller needs 226 to impose its own hierarchy on the data that is different from the 227 one that was defined as part of the original module. An example for 228 this concerns interface data, both operational data (e.g. various 229 types of interface statistics) and configuration data, such as 230 defined in [RFC7223]. This data will be contained in a top-level 231 container ("interfaces", in this particular case) in a network 232 element datastore. The controller may need to provide its clients a 233 view on interface data from multiple devices under its scope of 234 control. One way of to do so would involve organizing the data in a 235 list with separate list elements for each device. However, this in 236 turn would require introduction of redundant YANG modules that 237 effectively replicate the same interface data save for differences in 238 hierarchy. 240 By directly mounting information from network element datastores, the 241 controller does not need to replicate the same information from 242 multiple datastores, nor does it need to re-define any network 243 element and system-level abstractions to be able to put them in the 244 context of network abstractions. Instead, the subtree of the remote 245 system is attached to the local mount point. Operations that need to 246 access data below the mount point are in effect transparently 247 redirected to remote system, which is the authoritative owner of the 248 data. The mounting system does not even necessarily need to be aware 249 of the specific data in the remote subtree. Optionally, caching 250 strategies can be employed in which the mounting system prefetches 251 data. 253 A second category of applications concerns decentralized networking 254 applications that require globally consistent configuration of 255 parameters. When each network element maintains its own datastore 256 with the same configurable settings, a single global change requires 257 modifying the same information in many network elements across a 258 network. In case of inconsistent configurations, network failures 259 can result that are difficult to troubleshoot. In many cases, what 260 is more desirable is the ability to configure such settings in a 261 single place, then make them available to every network element. 262 Today, this requires in general the introduction of specialized 263 servers and configuration options outside the scope of NETCONF, such 264 as RADIUS [RFC2866] or DHCP [RFC2131]. In order to address this 265 within the scope of NETCONF and YANG, the same information would have 266 to be redundantly modeled and maintained, representing operational 267 data (mirroring some remote server) on some network elements and 268 configuration data on a designated master. Either way, additional 269 complexity ensues. 271 Instead of replicating the same global parameters across different 272 datastores, the solution presented in this document allows a single 273 copy to be maintained in a subtree of single datastore that is then 274 mounted by every network element that requires awareness of these 275 parameters. The global parameters can be hosted in a controller or a 276 designated network element. This considerably simplifies the 277 management of such parameters that need to be known across elements 278 in a network and require global consistency. 280 It should be noted that for these and many other applications merely 281 having a view of the remote information is sufficient. It allows to 282 define consolidated views of information without the need for 283 replicating data and models that have already been defined, to audit 284 information, and to validate consistency of configurations across a 285 network. Only retrieval operations are required; no operations that 286 involve configuring remote data are involved. 288 2. Definitions and Acronyms 290 Data node: An instance of management information in a YANG datastore. 292 DHCP: Dynamic Host Configuration Protocol. 294 Datastore: A conceptual store of instantiated management information, 295 with individual data items represented by data nodes which are 296 arranged in hierarchical manner. 298 Datastore-push: A mechanism that allows a client to subscribe to 299 updates from a datastore, which are then automatically pushed by the 300 server to the client. 302 Data subtree: An instantiated data node and the data nodes that are 303 hierarchically contained within it. 305 Mount client: The system at which the mount point resides, into which 306 the remote subtree is mounted. 308 Mount point: A data node that receives the root node of the remote 309 datastore being mounted. 311 Mount server: The server with which the mount client communicates and 312 which provides the mount client with access to the mounted 313 information. Can be used synonymously with mount target. 315 Mount target: A remote server whose datastore is being mounted. 317 NACM: NETCONF Access Control Model 319 NETCONF: Network Configuration Protocol 321 RADIUS: Remote Authentication Dial In User Service. 323 RPC: Remote Procedure Call 325 Remote datastore: A datastore residing at a remote node. 327 URI: Uniform Resource Identifier 329 YANG: A data definition language for NETCONF 331 3. Example scenarios 333 The following example scenarios outline some of the ways in which the 334 ability to mount YANG datastores can be applied. Other mount 335 topologies can be conceived in addition to the ones presented here. 337 3.1. Network controller view 339 Network controllers can use the mounting capability to present a 340 consolidated view of management information across the network. This 341 allows network controllers to expose network-wide abstractions, such 342 as topologies or paths, multi-device abstractions, such as VRRP 343 [RFC3768], and network-element specific abstractions, such as 344 information about a network element's interfaces. 346 While an application on top of a controller could bypass the 347 controller to access network elements directly for their element- 348 specific abstractions, this would come at the expense of added 349 inconvenience for the client application. In addition, it would 350 compromise the ability to provide layered architectures in which 351 access to the network by controller applications is truly channeled 352 through the controller. 354 Without a mounting capability, a network controller would need to at 355 least conceptually replicate data from network elements to provide 356 such a view, incorporating network element information into its own 357 controller model that is separate from the network element's, 358 indicating that the information in the controller model is to be 359 populated from network elements. This can introduce issues such as 360 data inconsistency and staleness. Equally important, it would lead 361 to the need to define redundant data models: one model that is 362 implemented by the network element itself, and another model to be 363 implemented by the network controller. This leads to poor 364 maintainability, as analogous information has to be redundantly 365 defined and implemented across different data models. In general, 366 controllers cannot simply support the same modules as their network 367 elements for the same information because that information needs to 368 be put into a different context. This leads to "node"-information 369 that needs to be instantiated and indexed differently, because there 370 are multiple instances across different data stores. 372 For example, "system"-level information of a network element would 373 most naturally placed into a top-level container at that network 374 element's datastore. At the same time, the same information in the 375 context of the overall network, such as maintained by a controller, 376 might better be provided in a list. For example, the controller 377 might maintain a list with a list element for each network element, 378 underneath which the network element's system-level information is 379 contained. However, the containment structure of data nodes in a 380 module, once defined, cannot be changed. This means that in the 381 context of a network controller, a second module that repeats the 382 same system-level information would need to be defined, implemented, 383 and maintained. Any augmentations that add additional system-level 384 information to the original module will likewise need to be 385 redundantly defined, once for the "system" module, a second time for 386 the "controller" module. 388 By allowing a network controller to directly mount information from 389 network element datastores, the controller does not need to replicate 390 the same information from multiple datastores. Perhaps even more 391 importantly, the need to re-define any network element and system- 392 level abstractions just to be able to put them in the context of 393 network abstractions is avoided. In this solution, a network 394 controller's datastore mounts information from many network element 395 datastores. For example, the network controller datastore (the 396 "primary" datastore) could implement a list in which each list 397 element contains a mountpoint. Each mountpoint mounts a subtree from 398 a different network element's datastore. The data from the mounted 399 subtrees is then accessible to clients of the primary datastore using 400 the usual data retrieval operations. 402 This scenario is depicted in Figure 1. In the figure, M1 is the 403 mountpoint for the datastore in Network Element 1 and M2 is the 404 mountpoint for the datastore in Network Element 2. MDN1 is the 405 mounted data node in Network Element 1, and MDN2 is the mounted data 406 node in Network Element 2. 408 +-------------+ 409 | Network | 410 | Controller | 411 | Datastore | 412 | | 413 | +--N10 | 414 | +--N11 | 415 | +--N12 | 416 | +--M1******************************* 417 | +--M2****** * 418 | | * * 419 +-------------+ * * 420 * +---------------+ * +---------------+ 421 * | +--N1 | * | +--N5 | 422 * | +--N2 | * | +--N6 | 423 ********> +--MDN2 | *********> +--MDN1 | 424 | +--N3 | | +--N7 | 425 | +--N4 | | +--N8 | 426 | | | | 427 | Network | | Network | 428 | Element | | Element | 429 | Datastore | | Datastore | 430 +---------------+ +---------------+ 432 Figure 1: Network controller mount topology 434 3.2. Consistent network configuration 436 A second category of applications concerns decentralized networking 437 applications that require globally consistent configuration of 438 parameters that need to be known across elements in a network. 439 Today, the configuration of such parameters is generally performed on 440 a per network element basis, which is not only redundant but, more 441 importantly, error-prone. Inconsistent configurations lead to 442 erroneous network behavior that can be challenging to troubleshoot. 444 Using the ability to mount information from remote datastores opens 445 up a new possibility for managing such settings. Instead of 446 replicating the same global parameters across different datastores, a 447 single copy is maintained in a subtree of single datastore. This 448 datastore can hosted in a controller or a designated network element. 449 The subtree is subsequently mounted by every network element that 450 requires access to these parameters. 452 In many ways, this category of applications is an inverse of the 453 previous category: Whereas in the network controller case data from 454 many different datastores would be mounted into the same datastore 455 with multiple mountpoints, in this case many elements, each with 456 their own datastore, mount the same remote datastore, which is then 457 mounted by many different systems. 459 The scenario is depicted in Figure 2. In the figure, M1 is the 460 mountpoint for the Network Controller datastore in Network Element 1 461 and M2 is the mountpoint for the Network Controller datastore in 462 Network Element 2. MDN is the mounted data node in the Network 463 Controller datastore that contains the data nodes that represent the 464 shared configuration settings. (Note that there is no reason why the 465 Network Controller Datastore in this figure could not simply reside 466 on a network element itself; the division of responsibilities is a 467 logical one. 469 +---------------+ +---------------+ 470 | Network | | Network | 471 | Element | | Element | 472 | Datastore | | Datastore | 473 | | | | 474 | +--N1 | | +--N5 | 475 | | +--N2 | | | +--N6 | 476 | | +--N2 | | | +--N6 | 477 | | +--N3 | | | +--N7 | 478 | | +--N4 | | | +--N8 | 479 | | | | | | 480 | +--M1 | | +--M2 | 481 +-----*---------+ +-----*---------+ 482 * * +---------------+ 483 * * | | 484 * * | +--N10 | 485 * * | +--N11 | 486 *********************************************> +--MDN | 487 | +--N20 | 488 | +--N21 | 489 | ... | 490 | +--N22 | 491 | | 492 | Network | 493 | Controller | 494 | Datastore | 495 +---------------+ 497 Figure 2: Distributed config settings topology 499 4. Operating on mounted data 501 This section provides a rough illustration of the operations flow 502 involving mounted datastores. 504 4.1. General principles 506 The first thing that should be noted about these operations flows 507 concerns the fact that a mount client essentially constitutes a 508 special management application that interacts with a subtree to 509 render the data of that subtree as an alternative tree hierarchy. In 510 the case of Alias-Mount, both original and alternative tree are 511 maintained by the same server, which in effect provides alternative 512 paths to the same data. In the case of Peer-Mount, the mount client 513 constitutes in effect another application, with the remote system 514 remaining the authoritative owner of the data. While it is 515 conceivable that the remote system (or an application that proxies 516 for the remote system) provides certain functionality to facilitate 517 the specific needs of the mount client to make it more efficient, the 518 fact that another system decides to expose a certain "view" of that 519 data is fundamentally not the remote system's concern. 521 When a client application makes a request to a server that involves 522 data that is mounted from a remote system, the server will 523 effectively act as a proxy to the remote system on the client 524 application's behalf. It will extract from the client application 525 request the portion that involves the mounted subtree from the remote 526 system. It will strip that portion of the local context, i.e. remove 527 any local data paths and insert the data path of the mounted remote 528 subtree, as appropriate. The server will then forward the transposed 529 request to the remote system that is the authoritative owner of the 530 mounted data, acting itself as a client to the remote server. Upon 531 receiving the reply, the server will transpose the results into the 532 local context as needed, for example map the data paths into the 533 local data tree structure, and combine those results with the results 534 of the remainder portion of the original request. 536 4.2. Data retrieval 538 Data retrieval operations are the only category of operations that is 539 supported for peer-mounted information. In that case, a Netconf 540 "get" or "get-configuration" operation might be applied on a subtree 541 whose scope includes a mount point. When resolving the mount point, 542 the server issues its own "get" or "get-configuration" request 543 against the remote system's subtree that is attached to the mount 544 point. The returned information is then inserted into the data 545 structure that is in turn returned to the client that originally 546 invoked the request. 548 4.3. Other operations 550 The fact that only data retrieval operations are the only category of 551 operations that are supported for peer-mounted information does not 552 preclude other operations to be applied to datastore subtrees that 553 contain mountpoints and peer-mounted information. Peer-mounted 554 information is simply transparent to those operations. When an 555 operation is applied to a subtree which includes mountpoints, mounted 556 information is ignored for purposes of the operation. For example, 557 for a Netconf "edit-config" operation that includes a subtree with a 558 mountpoint, a server will ignore the data under the mountpoint and 559 apply the operation only to the local configuration. Mounted data is 560 "read-only" data. The server does not even need to return an error 561 message that the operation could not be applied to mounted data; the 562 mountpoint is simply ignored. 564 In principle, it is conceivable that operations other than data- 565 retrieval are applied to mounted data as well. For example, an 566 operation to edit configuration information might expect edits to be 567 applied to remote systems as part of the operation, where the edited 568 subtree involves mounted information. However, editing of 569 information and "writing through" to remote systems potentially 570 involves significant complexity, particularly if transactions and 571 locking across multiple configuration items are involved. Support 572 for such operations will require additional capabilities, 573 specification of which is beyond the scope of this specification. 575 Likewise, YANG-Mount does not extend towards RPCs that are defined as 576 part of YANG modules whose contents is being mounted. Support for 577 RPCs that involve mounted portions of the datastore, while 578 conceivable, would require introduction of an additional capability, 579 whose definition is outside the scope of this specification. 581 By the same token, YANG-Mount does not extend towards notifications. 582 It is conceivable to offer such support in the future using a 583 separate capability, definition of which is once again outside the 584 scope of this specification. 586 4.4. Other considerations 588 Since mounting of information typically involves communication with a 589 remote system, there is a possibility that the remote system will not 590 respond within a certain amount of time, that connectivity is lost, 591 or that other errors occur. Accordingly, the ability to mount 592 datastores also involves mountpoint management, which includes the 593 ability to configure timeouts, retries, and management of mountpoint 594 state (including dynamic addition removal of mountpoints). 595 Mountpoint management will be discussed in section Section 5.3. 597 It is expected that some implementations will introduce caching 598 schemes. Caching can increase performance and efficiency in certain 599 scenarios (for example, in the case of data that is frequently read 600 but that rarely changes), but increases implementation complexity. 601 Caching is not required for YANG-mount to work - in which case access 602 to mounted information is "on-demand", in which the authoritative 603 data node always gets accessed. Whether to perform caching is a 604 local implementation decision. 606 When caching is introduced, it can benefit from the ability to 607 subscribe to updates on remote data by remote servers. Requirements 608 for such a capability have been defined in 609 [I-D.ietf-i2rs-pub-sub-requirements]. Some optimizations to 610 facilitate caching support will be discussed in section Section 5.4. 612 5. Data model structure 614 5.1. YANG mountpoint extensions 616 At the center of the module is a set of YANG extensions that allow to 617 define a mountpoint. 619 o The first extension, "mountpoint", is used to declare a 620 mountpoint. The extension takes the name of the mountpoint as an 621 argument. 623 o The second extension, "subtree", serves as substatement underneath 624 a mountpoint statement. It takes an argument that defines the 625 root node of the datastore subtree that is to be mounted, 626 specified as string that contains a path expression. This 627 extension is used to define mountpoints for Alias-Mount, as well 628 as Peer-Mount. 630 o The third extension, "target", also serves as a substatement 631 underneath a mountpoint statement. It is used for Peer-Mount and 632 takes an argument that identifies the target system. The argument 633 is a reference to a data node that contains the information that 634 is needed to identify and address a remote server, such as an IP 635 address, a host name, or a URI [RFC3986]. 637 A mountpoint MUST be contained underneath a container. Future 638 revisions might allow for mountpoints to be contained underneath 639 other data nodes, such as lists, leaf-lists, and cases. However, to 640 keep things simple, at this point mounting is only allowed directly 641 underneath a container. 643 Only a single data node can be mounted at one time. While the mount 644 target could refer to any data node, it is recommended that as a best 645 practice, the mount target SHOULD refer to a container. It is 646 possible to maintain e.g. a list of mount points, with each mount 647 point each of which has a mount target an element of a remote list. 648 However, to avoid unnecessary proliferation of the number of mount 649 points and associated management overhead, when data from lists or 650 leaf-lists is to be mounted, a container containing the list 651 respectively leaf-list SHOULD be mounted instead of individual list 652 elements. 654 It is possible for a mounted datastore to contain another mountpoint, 655 thus leading to several levels of mount indirections. However, 656 mountpoints MUST NOT introduce circular dependencies. In particular, 657 a mounted datastore MUST NOT contain a mountpoint which specifies the 658 mounting datastore as a target and a subtree which contains as root 659 node a data node that in turn contains the original mountpoint. 660 Whenever a mount operation is performed, this condition mountpoint. 661 Whenever a mount operation is performed, this condition MUST be 662 validated by the mount client. 664 5.2. YANG structure diagrams 666 YANG data model structure overviews have proven very useful to convey 667 the "Big Picture". It would be useful to indicate in YANG data model 668 structure overviews the fact that a given data node serves as a 669 mountpoint. We propose for this purpose also a corresponding 670 extension to the structure representation convention. Specifically, 671 we propose to prefix the name of the mounting data node with upper- 672 case 'M'. 674 rw network 675 +-- rw nodes 676 +-- rw node [node-ID] 677 +-- rw node-ID 678 +-- M node-system-info 680 5.3. Mountpoint management 682 The YANG module contains facilities to manage the mountpoints 683 themselves. 685 For this purpose, a list of the mountpoints is introduced. Each list 686 element represents a single mountpoint. It includes an 687 identification of the mount target, i.e. the remote system hosting 688 the remote datastore and a definition of the subtree of the remote 689 data node being mounted. It also includes monitoring information 690 about current status (indicating whether the mount has been 691 successful and is operational, or whether an error condition applies 692 such as the target being unreachable or referring to an invalid 693 subtree). 695 In addition to the list of mountpoints, a set of global mount policy 696 settings allows to set parameters such as mount retries and timeouts. 698 Each mountpoint list element also contains a set of the same 699 configuration knobs, allowing administrators to override global mount 700 policies and configure mount policies on a per-mountpoint basis if 701 needed. 703 There are two ways how mounting occurs: automatic (dynamically 704 performed as part of system operation) or manually (administered by a 705 user or client application). A separate mountpoint-origin object is 706 used to distinguish between manually configured and automatically 707 populated mountpoints. 709 Whether mounting occurs automatically or needs to be manually 710 configured by a user or an application can depend on the mountpoint 711 being defined, i.e. the semantics of the model. 713 When configured automatically, mountpoint information is 714 automatically populated by the datastore that implements the 715 mountpoint. The precise mechanisms for discovering mount targets and 716 bootstrapping mount points are provided by the mount client 717 infrastructure and outside the scope of this specification. 718 Likewise, when a mountpoint should be deleted and when it should 719 merely have its mount-status indicate that the target is unreachable 720 is a system-specific implementation decision. 722 Manual mounting consists of two steps. In a first step, a mountpoint 723 is manually configured by a user or client application through 724 administrative action. Once a mountpoint has been configured, actual 725 mounting occurs through an RPCs that is defined specifically for that 726 purpose. To unmount, a separate RPC is invoked; mountpoint 727 configuration information needs to be explicitly deleted. Manual 728 mounting can also be used to override automatic mounting, for example 729 to allow an administrator to set up or remove a mountpoint. 731 It should be noted that mountpoint management does not allow users to 732 manually "extend" the model, i.e. simply add a subtree underneath 733 some arbitrary data node into a datastore, without a supporting 734 mountpoint defined in the model to support it. A mountpoint 735 definition is a formal part of the model with well-defined semantics. 736 Accordingly, mountpoint management does not allow users to 737 dynamically "extend" the data model itself. It allows users to 738 populate the datastore and mount structure within the confines of a 739 model that has been defined prior. 741 The structure of the mountpoint management data model is depicted in 742 the following figure, where brackets enclose list keys, "rw" means 743 configuration, "ro" operational state data, and "?" designates 744 optional nodes. Parantheses enclose choice and case nodes. The 745 figure does not depict all definitions; it is intended to illustrate 746 the overall structure. 748 module: ietf-mount 749 +--rw mount-server-mgmt {mount-server-mgmt}? 750 +--rw mountpoints 751 | +--rw mountpoint* [mountpoint-id] 752 | +--rw mountpoint-id string 753 | +--ro mountpoint-origin? enumeration 754 | +--rw subtree-ref subtree-ref 755 | +--rw mount-target 756 | | +--rw (target-address-type) 757 | | +--:(IP) 758 | | | +--rw target-ip? inet:ip-address 759 | | +--:(URI) 760 | | | +--rw uri? inet:uri 761 | | +--:(host-name) 762 | | | +--rw hostname? inet:host 763 | | +--:(node-ID) 764 | | | +--rw node-info-ref? subtree-ref 765 | | +--:(other) 766 | | +--rw opaque-target-ID? string 767 | +--ro mount-status? mount-status 768 | +--rw manual-mount? empty 769 | +--rw retry-timer? uint16 770 | +--rw number-of-retries? uint8 771 +--rw global-mount-policies 772 +--rw manual-mount? empty 773 +--rw retry-timer? uint16 774 +--rw number-of-retries? uint8 776 5.4. Caching 778 Under certain circumstances, it can be useful to maintain a cache of 779 remote information. Instead of accessing the remote system, requests 780 are served from a copy that is locally maintained. This is 781 particularly advantageous in cases where data is slow changing, i.e. 782 when there are many more "read" operations than changes to the 783 underlying data node, and in cases when a significant delay were 784 incurred when accessing the remote system, which might be prohibitive 785 for certain applications. Examples of such applications are 786 applications that involve real-time control loops requiring response 787 times that are measured in milliseconds. However, as data nodes that 788 are mounted from an authoritative datastore represent the "golden 789 copy", it is important that any modifications are reflected as soon 790 as they are made. 792 It is a local implementation decision of mount clients whether to 793 cache information once it has been fetched. However, in order to 794 support more powerful caching schemes, it becomes necessary for the 795 mount server to "push" information proactively. For this purpose, it 796 is useful for the mount client to subscribe for updates to the 797 mounted information at the mount server. A corresponding mechanism 798 that can be leveraged for this purpose is specified in 799 [I-D.ietf-netconf-yang-push]. 801 Note that caching large mountpoints can be expensive. Therefore 802 limiting the amount of data unnecessarily passed when mounting near 803 the top of a YANG subtree is important. For these reasons, an 804 ability to specify a particular caching strategy in conjunction with 805 mountpoints can be desirable, including the ability to exclude 806 certain nodes and subtrees from caching. According capabilities may 807 be introduced in a future version of this draft. 809 5.5. Other considerations 811 5.5.1. Authorization 813 Access to mounted information is subject to authorization rules. To 814 the mounted system, a mounting client will in general appear like any 815 other client. Authorization privileges for remote mounting clients 816 need to be specified through NACM (NETCONF Access Control Model) 817 [RFC6536]. 819 5.5.2. Datastore qualification 821 It is conceivable to differentiate between different datastores on 822 the remote server, that is, to designate the name of the actual 823 datastore to mount, e.g. "running" or "startup". However, for the 824 purposes of this spec, we assume that the datastore to be mounted is 825 generally implied. Mounted information is treated as analogous to 826 operational data; in general, this means the running or "effective" 827 datastore is the target. That said, the information which targets to 828 mount does constitute configuration and can hence be part of a 829 startup or candidate datastore. 831 It is conceivable to use mount in conjunction with ephemeral 832 datastores, to address requirements outlined in 833 [I-D.haas-i2rs-netmod-netconf-requirements]. Support for such a 834 scheme is for further study and may be included in a future revision 835 of this spec. 837 5.5.3. Mount cascades 839 It is possible for the mounted subtree to in turn contain a 840 mountpoint. However, circular mount relationships MUST NOT be 841 introduced. For this reason, a mounted subtree MUST NOT contain a 842 mountpoint that refers back to the mounting system with a mount 843 target that directly or indirectly contains the originating 844 mountpoint. As part of a mount operation, the mount points of the 845 mounted system need to be checked accordingly. 847 5.5.4. Implementation considerations 849 Implementation specifics are outside the scope of this specification. 850 That said, the following considerations apply: 852 Systems that wish to mount information from remote datastores need to 853 implement a mount client. The mount client communicates with a 854 remote system to access the remote datastore. To do so, there are 855 several options: 857 o The mount client acts as a NETCONF client to a remote system. 858 Alternatively, another interface to the remote system can be used, 859 such as a REST API using JSON encodings, as specified in 860 [I-D.ietf-netconf-restconf]. Either way, to the remote system, 861 the mount client constitutes essentially a client application like 862 any other. The mount client in effect IS a special kind of client 863 application. 865 o The mount client communicates with a remote mount server through a 866 separate protocol. The mount server is deployed on the same 867 system as the remote NETCONF datastore and interacts with it 868 through a set of local APIs. 870 o The mount client communicates with a remote mount server that acts 871 as a NETCONF client proxy to a remote system, on the client's 872 behalf. The communication between mount client and remote mount 873 server might involve a separate protocol, which is translated into 874 NETCONF operations by the remote mount server. 876 It is the responsibility of the mount client to manage the 877 association with the target system, e.g. validate it is still 878 reachable by maintaining a permanent association, perform 879 reachability checks in case of a connectionless transport, etc. 881 It is the responsibility of the mount client to manage the 882 mountpoints. This means that the mount client needs to populate the 883 mountpoint monitoring information (e.g. keep mount-status up to data 884 and determine in the case of automatic mounting when to add and 885 remove mountpoint configuration). In the case of automatic mounting, 886 the mount client also interacts with the mountpoint discovery and 887 bootstrap process. 889 The mount client needs to also participate in servicing datastore 890 operations involving mounted information. An operation requested 891 involving a mountpoint is relayed by the mounting system's 892 infrastructure to the mount client. For example, a request to 893 retrieve information from a datastore leads to an invocation of an 894 internal mount client API when a mount point is reached. The mount 895 client then relays a corresponding operation to the remote datastore. 896 It subsequently relays the result along with any responses back to 897 the invoking infrastructure, which then merges the result (e.g. a 898 retrieved subtree with the rest of the information that was 899 retrieved) as needed. Relaying the result may involve the need to 900 transpose error response codes in certain corner cases, e.g. when 901 mounted information could not be reached due to loss of connectivity 902 with the remote server, or when a configuration request failed due to 903 validation error. 905 5.5.5. Modeling best practices 907 There is a certain amount of overhead associated with each mount 908 point. The mount point needs to be managed and state maintained. 909 Data subscriptions need to be maintained. Requests including mounted 910 subtrees need to be decomposed and responses from multiple systems 911 combined. 913 For those reasons, as a general best practice, models that make use 914 of mount points SHOULD be defined in a way that minimizes the number 915 of mountpoints required. Finely granular mounts, in which multiple 916 mountpoints are maintained with the same remote system, each 917 containing only very small data subtrees, SHOULD be avoided. For 918 example, lists SHOULD only contain mountpoints when individual list 919 elements are associated with different remote systems. To mount data 920 from lists in remote datastores, a container node that contains all 921 list elements SHOULD be mounted instead of mounting each list element 922 individually. Likewise, instead of having mount points refer to 923 nodes contained underneath choices, a mountpoint should refer to a 924 container of the choice. 926 6. Datastore mountpoint YANG module 928 929 file "ietf-mount@2016-03-21.yang" 930 module ietf-mount { 931 namespace "urn:ietf:params:xml:ns:yang:ietf-mount"; 932 prefix mnt; 933 import ietf-inet-types { 934 prefix inet; 935 } 937 organization 938 "IETF NETMOD (NETCONF Data Modeling Language) Working Group"; 939 contact 940 "WG Web: 941 WG List: 943 WG Chair: Kent Watsen 944 946 WG Chair: Lou Berger 947 949 WG Chair: Juergen Schoenwaelder 950 952 Editor: Alexander Clemm 953 955 Editor: Jan Medved 956 958 Editor: Eric Voit 959 "; 960 description 961 "This module provides a set of YANG extensions and definitions 962 that can be used to mount information from remote datastores."; 964 revision 2016-03-21 { 965 description 966 "Initial revision."; 967 reference 968 "draft-clemm-netmod-mount-04.txt"; 969 } 971 extension mountpoint { 972 argument name; 973 description 974 "This YANG extension is used to mount data from another 975 subtree in place of the node under which this YANG extension 976 statement is used. 978 This extension takes one argument which specifies the name 979 of the mountpoint. 981 This extension can occur as a substatement underneath a 982 container statement, a list statement, or a case statement. 983 As a best practice, it SHOULD occur as statement only 984 underneath a container statement, but it MAY also occur 985 underneath a list or a case statement. 987 The extension can take two parameters, target and subtree, 988 each defined as their own YANG extensions. 990 For Alias-Mount, a mountpoint statement MUST contain a 991 subtree statement for the mountpoint definition to be valid. 992 For Peer-Mount, a mountpoint statement MUST contain both a 993 target and a subtree substatement for the mountpoint 994 definition to be valid. 996 The subtree SHOULD be specified in terms of a data node of 997 type 'mnt:subtree-ref'. The targeted data node MUST 998 represent a container. 1000 The target system MAY be specified in terms of a data node 1001 that uses the grouping 'mnt:mount-target'. However, it 1002 can be specified also in terms of any other data node that 1003 contains sufficient information to address the mount target, 1004 such as an IP address, a host name, or a URI. 1006 It is possible for the mounted subtree to in turn contain a 1007 mountpoint. However, circular mount relationships MUST NOT 1008 be introduced. For this reason, a mounted subtree MUST NOT 1009 contain a mountpoint that refers back to the mounting system 1010 with a mount target that directly or indirectly contains the 1011 originating mountpoint."; 1012 } 1014 extension target { 1015 argument target-name; 1016 description 1017 "This YANG extension is used to perform a Peer-Mount. 1018 It is used to specify a remote target system from which to 1019 mount a datastore subtree. This YANG 1020 extension takes one argument which specifies the remote 1021 system. In general, this argument will contain the name of 1022 a data node that contains the remote system information. It 1023 is recommended that the reference data node uses the 1024 mount-target grouping that is defined further below in this 1025 module. 1027 This YANG extension can occur only as a substatement below 1028 a mountpoint statement. It MUST NOT occur as a substatement 1029 below any other YANG statement."; 1030 } 1032 extension subtree { 1033 argument subtree-path; 1034 description 1035 "This YANG extension is used to specify a subtree in a 1036 datastore that is to be mounted. This YANG extension takes 1037 one argument which specifies the path to the root of the 1038 subtree. The root of the subtree SHOULD represent an 1039 instance of a YANG container. However, it MAY represent 1040 also another data node. 1042 This YANG extension can occur only as a substatement below 1043 a mountpoint statement. It MUST NOT occur as a substatement 1044 below any other YANG statement."; 1045 } 1047 feature mount-server-mgmt { 1048 description 1049 "Provide additional capabilities to manage remote mount 1050 points"; 1051 } 1053 typedef mount-status { 1054 type enumeration { 1055 enum "ok" { 1056 description 1057 "Mounted"; 1058 } 1059 enum "no-target" { 1060 description 1061 "The argument of the mountpoint does not define a 1062 target system"; 1063 } 1064 enum "no-subtree" { 1065 description 1066 "The argument of the mountpoint does not define a 1067 root of a subtree"; 1068 } 1069 enum "target-unreachable" { 1070 description 1071 "The specified target system is currently 1072 unreachable"; 1073 } 1074 enum "mount-failure" { 1075 description 1076 "Any other mount failure"; 1078 } 1079 enum "unmounted" { 1080 description 1081 "The specified mountpoint has been unmounted as the 1082 result of a management operation"; 1083 } 1084 } 1085 description 1086 "This type is used to represent the status of a 1087 mountpoint."; 1088 } 1090 typedef subtree-ref { 1091 type string; 1092 description 1093 "This string specifies a path to a datanode. It corresponds 1094 to the path substatement of a leafref type statement. Its 1095 syntax needs to conform to the corresponding subset of the 1096 XPath abbreviated syntax. Contrary to a leafref type, 1097 subtree-ref allows to refer to a node in a remote datastore. 1098 Also, a subtree-ref refers only to a single node, not a list 1099 of nodes."; 1100 } 1102 grouping mount-monitor { 1103 description 1104 "This grouping contains data nodes that indicate the 1105 current status of a mount point."; 1106 leaf mount-status { 1107 type mount-status; 1108 config false; 1109 description 1110 "Indicates whether a mountpoint has been successfully 1111 mounted or whether some kind of fault condition is 1112 present."; 1113 } 1114 } 1116 grouping mount-target { 1117 description 1118 "This grouping contains data nodes that can be used to 1119 identify a remote system from which to mount a datastore 1120 subtree."; 1121 container mount-target { 1122 description 1123 "A container is used to keep mount target information 1124 together."; 1125 choice target-address-type { 1126 mandatory true; 1127 description 1128 "Allows to identify mount target in different ways, 1129 i.e. using different types of addresses."; 1130 case IP { 1131 leaf target-ip { 1132 type inet:ip-address; 1133 description 1134 "IP address identifying the mount target."; 1135 } 1136 } 1137 case URI { 1138 leaf uri { 1139 type inet:uri; 1140 description 1141 "URI identifying the mount target"; 1142 } 1143 } 1144 case host-name { 1145 leaf hostname { 1146 type inet:host; 1147 description 1148 "Host name of mount target."; 1149 } 1150 } 1151 case node-ID { 1152 leaf node-info-ref { 1153 type subtree-ref; 1154 description 1155 "Node identified by named subtree."; 1156 } 1157 } 1158 case other { 1159 leaf opaque-target-ID { 1160 type string; 1161 description 1162 "Catch-all; could be used also for mounting 1163 of data nodes that are local."; 1164 } 1165 } 1166 } 1167 } 1168 } 1170 grouping mount-policies { 1171 description 1172 "This grouping contains data nodes that allow to configure 1173 policies associated with mountpoints."; 1175 leaf manual-mount { 1176 type empty; 1177 description 1178 "When present, a specified mountpoint is not 1179 automatically mounted when the mount data node is 1180 created, but needs to mounted via specific RPC 1181 invocation."; 1182 } 1183 leaf retry-timer { 1184 type uint16; 1185 units "seconds"; 1186 description 1187 "When specified, provides the period after which 1188 mounting will be automatically reattempted in case of a 1189 mount status of an unreachable target"; 1190 } 1191 leaf number-of-retries { 1192 type uint8; 1193 description 1194 "When specified, provides a limit for the number of 1195 times for which retries will be automatically 1196 attempted"; 1197 } 1198 } 1200 rpc mount { 1201 description 1202 "This RPC allows an application or administrative user to 1203 perform a mount operation. If successful, it will result in 1204 the creation of a new mountpoint."; 1205 input { 1206 leaf mountpoint-id { 1207 type string { 1208 length "1..32"; 1209 } 1210 description 1211 "Identifier for the mountpoint to be created. 1212 The mountpoint-id needs to be unique; 1213 if the mountpoint-id of an existing mountpoint is 1214 chosen, an error is returned."; 1215 } 1216 } 1217 output { 1218 leaf mount-status { 1219 type mount-status; 1220 description 1221 "Indicates if the mount operation was successful."; 1222 } 1224 } 1225 } 1226 rpc unmount { 1227 description 1228 "This RPC allows an application or administrative user to 1229 unmount information from a remote datastore. If successful, 1230 the corresponding mountpoint will be removed from the 1231 datastore."; 1232 input { 1233 leaf mountpoint-id { 1234 type string { 1235 length "1..32"; 1236 } 1237 description 1238 "Identifies the mountpoint to be unmounted."; 1239 } 1240 } 1241 output { 1242 leaf mount-status { 1243 type mount-status; 1244 description 1245 "Indicates if the unmount operation was successful."; 1246 } 1247 } 1248 } 1249 container mount-server-mgmt { 1250 if-feature mount-server-mgmt; 1251 description 1252 "Contains information associated with managing the 1253 mountpoints of a datastore."; 1254 container mountpoints { 1255 description 1256 "Keep the mountpoint information consolidated 1257 in one place."; 1258 list mountpoint { 1259 key "mountpoint-id"; 1260 description 1261 "There can be multiple mountpoints. 1262 Each mountpoint is represented by its own 1263 list element."; 1264 leaf mountpoint-id { 1265 type string { 1266 length "1..32"; 1267 } 1268 description 1269 "An identifier of the mountpoint. 1270 RPC operations refer to the mountpoint 1271 using this identifier."; 1273 } 1274 leaf mountpoint-origin { 1275 type enumeration { 1276 enum "client" { 1277 description 1278 "Mountpoint has been supplied and is 1279 manually administered by a client"; 1280 } 1281 enum "auto" { 1282 description 1283 "Mountpoint is automatically 1284 administered by the server"; 1285 } 1286 } 1287 config false; 1288 description 1289 "This describes how the mountpoint came 1290 into being."; 1291 } 1292 leaf subtree-ref { 1293 type subtree-ref; 1294 mandatory true; 1295 description 1296 "Identifies the root of the subtree in the 1297 target system that is to be mounted."; 1298 } 1299 uses mount-target; 1300 uses mount-monitor; 1301 uses mount-policies; 1302 } 1303 } 1304 container global-mount-policies { 1305 description 1306 "Provides mount policies applicable for all mountpoints, 1307 unless overridden for a specific mountpoint."; 1308 uses mount-policies; 1309 } 1310 } 1311 } 1313 1315 7. Security Considerations 1317 TBD 1319 8. Acknowledgements 1321 We wish to acknowledge the helpful contributions, comments, and 1322 suggestions that were received from Tony Tkacik, Ambika Tripathy, 1323 Robert Varga, Prabhakara Yellai, Shashi Kumar Bansal, Lukas Sedlak, 1324 and Benoit Claise. 1326 9. References 1328 9.1. Normative References 1330 [RFC2131] Droms, R., "Dynamic Host Configuration Protocol", 1331 RFC 2131, DOI 10.17487/RFC2131, March 1997, 1332 . 1334 [RFC2866] Rigney, C., "RADIUS Accounting", RFC 2866, 1335 DOI 10.17487/RFC2866, June 2000, 1336 . 1338 [RFC3768] Hinden, R., Ed., "Virtual Router Redundancy Protocol 1339 (VRRP)", RFC 3768, DOI 10.17487/RFC3768, April 2004, 1340 . 1342 [RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform 1343 Resource Identifier (URI): Generic Syntax", STD 66, 1344 RFC 3986, DOI 10.17487/RFC3986, January 2005, 1345 . 1347 [RFC6020] Bjorklund, M., Ed., "YANG - A Data Modeling Language for 1348 the Network Configuration Protocol (NETCONF)", RFC 6020, 1349 DOI 10.17487/RFC6020, October 2010, 1350 . 1352 [RFC6241] Enns, R., Ed., Bjorklund, M., Ed., Schoenwaelder, J., Ed., 1353 and A. Bierman, Ed., "Network Configuration Protocol 1354 (NETCONF)", RFC 6241, DOI 10.17487/RFC6241, June 2011, 1355 . 1357 [RFC6536] Bierman, A. and M. Bjorklund, "Network Configuration 1358 Protocol (NETCONF) Access Control Model", RFC 6536, 1359 DOI 10.17487/RFC6536, March 2012, 1360 . 1362 [RFC7223] Bjorklund, M., "A YANG Data Model for Interface 1363 Management", RFC 7223, DOI 10.17487/RFC7223, May 2014, 1364 . 1366 9.2. Informative References 1368 [I-D.bjorklund-netmod-structural-mount] 1369 Bjorklund, M., "YANG Structural Mount", draft-I- 1370 D.bjorklund-netmod-structural-mount-02 (work in progress), 1371 February 2016. 1373 [I-D.haas-i2rs-netmod-netconf-requirements] 1374 Haas, J., "I2RS Requirements for Netmod/Netconf", draft- 1375 haas-i2rs-netmod-netconf-requirements-01 (work in 1376 progress), March 2015. 1378 [I-D.ietf-i2rs-pub-sub-requirements] 1379 Voit, E., Clemm, A., and A. Gonzalez Prieto, "Requirements 1380 for subscription to YANG datastores", draft-ietf-i2rs-pub- 1381 sub-requirements-05 (work in progress), February 2016. 1383 [I-D.ietf-netconf-restconf] 1384 Bierman, A., Bjorklund, M., and K. Watsen, "RESTCONF 1385 Protocol", draft-ietf-netconf-restonf-04 (work in 1386 progress), January 2015. 1388 [I-D.ietf-netconf-yang-push] 1389 Clemm, A., Gonzalez Prieto, A., Voit, E., Tripathy, A., 1390 and E. Nilsen-Nygaard, "Subscribing to YANG datastore push 1391 updates", draft-ietf-netconf-yang-push-02 (work in 1392 progress), March 2016. 1394 [I-D.voit-netmod-yang-mount-requirements] 1395 Voit, E., Clemm, A., and S. Mertens, "Requirements for 1396 mounting of local and remote YANG subtrees", draft-voit- 1397 netmod-yang-mount-requirements-00 (work in progress), 1398 March 2016. 1400 Appendix A. Example 1402 In the following example, we are assuming the use case of a network 1403 controller that wants to provide a controller network view to its 1404 client applications. This view needs to include network abstractions 1405 that are maintained by the controller itself, as well as certain 1406 information about network devices where the network abstractions tie 1407 in with element-specific information. For this purpose, the network 1408 controller leverages the mount capability specified in this document 1409 and presents a fictitious Controller Network YANG Module that is 1410 depicted in the outlined structure below. The example illustrates 1411 how mounted information is leveraged by the mounting datastore to 1412 provide an additional level of information that ties together network 1413 and device abstractions, which could not be provided otherwise 1414 without introducing a (redundant) model to replicate those device 1415 abstractions 1417 rw controller-network 1418 +-- rw topologies 1419 | +-- rw topology [topo-id] 1420 | +-- rw topo-id node-id 1421 | +-- rw nodes 1422 | | +-- rw node [node-id] 1423 | | +-- rw node-id node-id 1424 | | +-- rw supporting-ne network-element-ref 1425 | | +-- rw termination-points 1426 | | +-- rw term-point [tp-id] 1427 | | +-- tp-id tp-id 1428 | | +-- ifref mountedIfRef 1429 | +-- rw links 1430 | +-- rw link [link-id] 1431 | +-- rw link-id link-id 1432 | +-- rw source tp-ref 1433 | +-- rw dest tp-ref 1434 +-- rw network-elements 1435 +-- rw network-element [element-id] 1436 +-- rw element-id element-id 1437 +-- rw element-address 1438 | +-- ... 1439 +-- M interfaces 1441 The controller network model consists of the following key 1442 components: 1444 o A container with a list of topologies. A topology is a graph 1445 representation of a network at a particular layer, for example, an 1446 IS-IS topology, an overlay topology, or an Openflow topology. 1447 Specific topology types can be defined in their own separate YANG 1448 modules that augment the controller network model. Those 1449 augmentations are outside the scope of this example 1451 o An inventory of network elements, along with certain information 1452 that is mounted from each element. The information that is 1453 mounted in this case concerns interface configuration information. 1454 For this purpose, each list element that represents a network 1455 element contains a corresponding mountpoint. The mountpoint uses 1456 as its target the network element address information provided in 1457 the same list element 1459 o Each topology in turn contains a container with a list of nodes. 1460 A node is a network abstraction of a network device in the 1461 topology. A node is hosted on a network element, as indicated by 1462 a network-element leafref. This way, the "logical" and "physical" 1463 aspects of a node in the network are cleanly separated. 1465 o A node also contains a list of termination points that terminate 1466 links. A termination point is implemented on an interface. 1467 Therefore, it contains a leafref that references the corresponding 1468 interface configuration which is part of the mounted information 1469 of a network element. Again, the distinction between termination 1470 points and interfaces provides a clean separation between logical 1471 concepts at the network topology level and device-specific 1472 concepts that are instantiated at the level of a network element. 1473 Because the interface information is mounted from a different 1474 datastore and therefore occurs at a different level of the 1475 containment hierarchy than it would if it were not mounted, it is 1476 not possible to use the interface-ref type that is defined in YANG 1477 data model for interface management [] to allow the termination 1478 point refer to its supporting interface. For this reason, a new 1479 type definition "mountedIfRef" is introduced that allows to refer 1480 to interface information that is mounted and hence has a different 1481 path. 1483 o Finally, a topology also contains a container with a list of 1484 links. A link is a network abstraction that connects nodes via 1485 node termination points. In the example, directional point-to- 1486 point links are depicted in which one node termination point 1487 serves as source, another as destination. 1489 The following is a YANG snippet of the module definition which makes 1490 use of the mountpoint definition. 1492 1493 module controller-network { 1494 namespace "urn:cisco:params:xml:ns:yang:controller-network"; 1495 // example only, replace with IANA namespace when assigned 1496 prefix cn; 1497 import mount { 1498 prefix mnt; 1499 } 1500 import interfaces { 1501 prefix if; 1502 } 1503 ... 1504 typedef mountedIfRef { 1505 type leafref { 1506 path "/cn:controller-network/cn:network-elements/" 1507 +"cn:network-element/cn:interfaces/if:interface/if:name"; 1508 // cn:interfaces corresponds to the mountpoint 1509 } 1510 } 1511 ... 1512 list termination-point { 1513 key "tp-id"; 1514 ... 1515 leaf ifref { 1516 type mountedIfRef; 1517 } 1518 ... 1519 list network-element { 1520 key "element-id"; 1521 leaf element-id { 1522 type element-ID; 1523 } 1524 container element-address { 1525 ... // choice definition that allows to specify 1526 // host name, 1527 // IP addresses, URIs, etc 1528 } 1529 mnt:mountpoint "interfaces" { 1530 mnt:target "./element-address"; 1531 mnt:subtree "/if:interfaces"; 1532 } 1533 ... 1534 } 1535 ... 1536 1538 Finally, the following contains an XML snippet of instantiated YANG 1539 information. We assume three datastores: NE1 and NE2 each have a 1540 datastore (the mount targets) that contains interface configuration 1541 data, which is mounted into NC's datastore (the mount client). 1543 Interface information from NE1 datastore: 1545 1546 1547 fastethernet-1/0 1548 ethernetCsmacd 1549 1/0 1550 1551 1552 fastethernet-1/1 1553 ethernetCsmacd 1554 1/1 1555 1556 1558 Interface information from NE2 datastore: 1559 1560 1561 fastethernet-1/0 1562 ethernetCsmacd 1563 1/0 1564 1565 1566 fastethernet-1/2 1567 ethernetCsmacd 1568 1/2 1569 1570 1572 NC datastore with mounted interface information from NE1 and NE2: 1574 1575 ... 1576 1577 1578 NE1 1579 .... 1580 1581 1582 fastethernet-1/0 1583 ethernetCsmacd 1584 1/0 1585 1586 1587 fastethernet-1/1 1588 ethernetCsmacd 1589 1/1 1590 1591 1592 1593 1594 NE2 1595 .... 1596 1597 1598 fastethernet-1/0 1599 ethernetCsmacd 1600 1/0 1601 1602 1603 fastethernet-1/2 1604 ethernetCsmacd 1605 1/2 1606 1607 1608 1609 1610 ... 1611 1613 Authors' Addresses 1615 Alexander Clemm 1616 Cisco Systems 1618 EMail: alex@cisco.com 1619 Jan Medved 1620 Cisco Systems 1622 EMail: jmedved@cisco.com 1624 Eric Voit 1625 Cisco Systems 1627 EMail: evoit@cisco.com