2 Network Working Group A. Clemm 3 Internet-Draft J. Medved 4 Intended status: Experimental E. Voit 5 Expires: March 23, 2017 Cisco Systems 6 September 19, 2016 8 Mounting YANG-Defined Information from Remote Datastores 9 draft-clemm-netmod-mount-05.txt 11 Abstract 13 This document introduces capabilities that allow YANG datastores to 14 reference and incorporate information from remote datastores.
This 15 is accomplished by extending YANG with the ability to define mount 16 points that reference data nodes in another YANG subtree, by 17 subsequently allowing those data nodes to be accessed by client 18 applications as if part of an alternative data hierarchy, and by 19 providing the necessary means to manage and administer those mount 20 points. Two flavors are defined: Alias-Mount allows to mount local 21 subtrees, while Peer-Mount allows subtrees to reside on and be 22 authoritatively owned by a remote server. YANG-Mount facilitates the 23 development of applications that need to access data that transcends 24 individual network devices while improving network-wide object 25 consistency, or that require an aliasing capability to be able to 26 create overlay structures for YANG data. 28 Status of This Memo 30 This Internet-Draft is submitted in full conformance with the 31 provisions of BCP 78 and BCP 79. 33 Internet-Drafts are working documents of the Internet Engineering 34 Task Force (IETF). Note that other groups may also distribute 35 working documents as Internet-Drafts. The list of current Internet- 36 Drafts is at http://datatracker.ietf.org/drafts/current/. 38 Internet-Drafts are draft documents valid for a maximum of six months 39 and may be updated, replaced, or obsoleted by other documents at any 40 time. It is inappropriate to use Internet-Drafts as reference 41 material or to cite them other than as "work in progress." 43 This Internet-Draft will expire on March 23, 2017. 45 Copyright Notice 47 Copyright (c) 2016 IETF Trust and the persons identified as the 48 document authors. All rights reserved. 50 This document is subject to BCP 78 and the IETF Trust's Legal 51 Provisions Relating to IETF Documents 52 (http://trustee.ietf.org/license-info) in effect on the date of 53 publication of this document. Please review these documents 54 carefully, as they describe your rights and restrictions with respect 55 to this document. Code Components extracted from this document must 56 include Simplified BSD License text as described in Section 4.e of 57 the Trust Legal Provisions and are provided without warranty as 58 described in the Simplified BSD License. 60 This document may contain material from IETF Documents or IETF 61 Contributions published or made publicly available before November 62 10, 2008. The person(s) controlling the copyright in some of this 63 material may not have granted the IETF Trust the right to allow 64 modifications of such material outside the IETF Standards Process. 65 Without obtaining an adequate license from the person(s) controlling 66 the copyright in such materials, this document may not be modified 67 outside the IETF Standards Process, and derivative works of it may 68 not be created outside the IETF Standards Process, except to format 69 it for publication as an RFC or to translate it into languages other 70 than English. 72 Table of Contents 74 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 75 1.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . 3 76 1.2. Examples . . . . . . . . . . . . . . . . . . . . . . . . 5 77 2. Definitions and Acronyms . . . . . . . . . . . . . . . . . . 7 78 3. Example scenarios . . . . . . . . . . . . . . . . . . . . . . 8 79 3.1. Network controller view . . . . . . . . . . . . . . . . . 8 80 3.2. Consistent network configuration . . . . . . . . . . . . 10 81 4. Operating on mounted data . . . . . . . . . . . . . . . . . . 11 82 4.1. General principles . . . . . . . . . . . . . . . . . 
. . 12 83 4.2. Data retrieval . . . . . . . . . . . . . . . . . . . . . 12 84 4.3. Other operations . . . . . . . . . . . . . . . . . . . . 13 85 4.4. Other considerations . . . . . . . . . . . . . . . . . . 13 86 5. Data model structure . . . . . . . . . . . . . . . . . . . . 14 87 5.1. YANG mountpoint extensions . . . . . . . . . . . . . . . 14 88 5.2. YANG structure diagrams . . . . . . . . . . . . . . . . . 15 89 5.3. Mountpoint management . . . . . . . . . . . . . . . . . . 15 90 5.4. Caching . . . . . . . . . . . . . . . . . . . . . . . . . 17 91 5.5. Other considerations . . . . . . . . . . . . . . . . . . 18 92 5.5.1. Authorization . . . . . . . . . . . . . . . . . . . . 18 93 5.5.2. Datastore qualification . . . . . . . . . . . . . . . 18 94 5.5.3. Mount cascades . . . . . . . . . . . . . . . . . . . 18 95 5.5.4. Implementation considerations . . . . . . . . . . . . 19 96 5.5.5. Modeling best practices . . . . . . . . . . . . . . . 20 97 6. Datastore mountpoint YANG module . . . . . . . . . . . . . . 20 98 7. Security Considerations . . . . . . . . . . . . . . . . . . . 28 99 8. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 28 100 9. References . . . . . . . . . . . . . . . . . . . . . . . . . 29 101 9.1. Normative References . . . . . . . . . . . . . . . . . . 29 102 9.2. Informative References . . . . . . . . . . . . . . . . . 30 103 Appendix A. Example . . . . . . . . . . . . . . . . . . . . . . 31 104 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 35 106 1. Introduction 108 1.1. Overview 110 This document introduces a new capability that allows YANG datastores 111 [RFC6020] to incorporate and reference information from other YANG 112 subtrees. The capability allows a client application to retrieve and 113 have visibility of that YANG data as part of an alternative 114 structure. This is provided by introducing a mountpoint concept. 115 This concept allows to declare a YANG data node in a primary 116 datastore to serve as a "mount point" under which a subtree with YANG 117 data can be mounted. This way, data nodes from another subtree can 118 be inserted into an alternative data hierarchy, arranged below local 119 data nodes. To the user, this provides visibility to data from other 120 subtrees, rendered in a way that makes it appear largely as if it 121 were an integral part of the datastore. This enables users to 122 retrieve local "native" as well as mounted data in integrated 123 fashion, using e.g. Netconf [RFC6241] or Restconf 124 [I-D.ietf-netconf-restconf] data retrieval primitives. The concept 125 is reminiscent of concepts in a Network File System that allows to 126 mount remote folders and make them appear as if they were contained 127 in the local file system of the user's machine. 129 Two variants of YANG-Mount are introduced, which build on one 130 another: 132 o Alias-Mount allows mountpoints to reference a local YANG subtree 133 residing on the same server. It provides effectively an aliasing 134 capability, allowing for an alternative hierarchy and path for the 135 same YANG data. 137 o Peer-Mount allows mountpoints to reference a remote YANG subtree, 138 residing on a different server. It can be thought of as an 139 extension to Alias-Mount, in which a remote server can be 140 specified. Peer-Mount allows a server to effectively provide a 141 federated datastore, including YANG data from across the network. 143 In each case, mounted data is authoritatively owned by the server 144 that it is a part of. 
Validation of integrity constraints applies to 145 the authoritative copy; mounting merely provides a different view of 146 the same data. It does not impose additional constraints on that 147 same data; however, mounted data may be referred to from other data 148 nodes. The mountpoint concept applies in principle to operations 149 beyond data retrieval, i.e. to configuration, RPCs, and 150 notifications. However, support for such operations involves 151 additional considerations, for example if support for configuration 152 transactions and locking (which might now apply across the network) 153 were to be provided. While it is conceivable that additional 154 capabilities for operations on mounted information are introduced at 155 some point in time, their specification is beyond the scope of this 156 document.

158 YANG does provide means by which modules that have been separately 159 defined can reference and augment one another. YANG also provides 160 means to specify data nodes that reference other data nodes. 161 However, all the data is assumed to be instantiated as part of the 162 same datastore, for example a datastore provided through a NETCONF 163 server. Existing YANG mechanisms do not account for the possibility 164 that information that needs to be referred to is not merely located in a 165 different subtree of the same datastore, or defined in a separate 166 module that is also instantiated in the same datastore, but is 167 genuinely part of a different datastore that is provided by a 168 different server.

170 The ability to mount information from local and remote datastores is 171 new and not covered by existing YANG mechanisms. Until now, 172 management information provided in a datastore has been intrinsically 173 tied to the same server and to a single data hierarchy. In contrast, 174 the capability introduced in this specification allows the server to 175 render alternative data hierarchies, and to represent information 176 from remote systems as if it were its own and contained in its own 177 local data hierarchy.

179 The capability of allowing the mounting of information from other 180 subtrees is accomplished by a set of YANG extensions that allow to 181 define such mount points. For this purpose, a new YANG module is 182 introduced. The module defines the YANG extensions, as well as a 183 data model that can be used to manage the mountpoints and the mounting 184 process itself. Only the mounting module and its server (i.e. the 185 "receivers" or "consumers" of the mounted information) need to be 186 aware of the concepts introduced here. Mounting is transparent to 187 the "providers" of the mounted information and the models that are being 188 mounted; any data nodes or subtrees within any YANG model can be 189 mounted.

191 Alias-Mount and Peer-Mount build on top of each other. It is 192 possible for a server to support Alias-Mount but not Peer-Mount. In 193 essence, Peer-Mount requires an additional parameter that is used to 194 refer to the target system. This parameter does not need to be 195 supported if only Alias-Mount is provided.

197 Finally, it should be mentioned that Alias-Mount and Peer-Mount are 198 not to be confused with the ability to mount a schema, aka Schema 199 Mount. Schema Mount allows an existing model 200 definition to be instantiated underneath a mount point; it does not reference a set of YANG data 201 that has already been instantiated somewhere else.
In that sense, 202 Schema-Mount resembles more a "grouping" concept that allows to reuse 203 an existing definition in a new context, as opposed to referencing 204 and incorporating existing instance information into a new context.

206 1.2.  Examples

208 The requirements for mounting YANG subtrees from remote datastores, 209 along with a set of associated use cases, are documented in 210 [I-D.voit-netmod-yang-mount-requirements]. The ability to mount data 211 from remote datastores is useful to address various problems that 212 several categories of applications are faced with.

214 One category of applications that can leverage this capability is 215 network controller applications that need to present a consolidated 216 view of management information in datastores across a network. 217 Controller applications are faced with the problem that in order to 218 expose information, that information needs to be part of their own 219 datastore. Today, this requires support of a corresponding YANG data 220 module. In order to expose information that concerns other network 221 elements, that information has to be replicated into the controller's 222 own datastore in the form of data nodes that may mirror but are 223 clearly distinct from corresponding data nodes in the network 224 element's datastore. In addition, in many cases, a controller needs 225 to impose its own hierarchy on the data that is different from the 226 one that was defined as part of the original module. An example of 227 this concerns interface data, both operational data (e.g. various 228 types of interface statistics) and configuration data, such as 229 defined in [RFC7223]. This data will be contained in a top-level 230 container ("interfaces", in this particular case) in a network 231 element datastore. The controller may need to provide its clients with a 232 view of interface data from multiple devices under its scope of 233 control. One way to do so would involve organizing the data in a 234 list with separate list elements for each device. However, this in 235 turn would require the introduction of redundant YANG modules that 236 effectively replicate the same interface data save for differences in 237 hierarchy.

239 By directly mounting information from network element datastores, the 240 controller does not need to replicate the same information from 241 multiple datastores, nor does it need to re-define any network 242 element and system-level abstractions to be able to put them in the 243 context of network abstractions. Instead, the subtree of the remote 244 system is attached to the local mount point. Operations that need to 245 access data below the mount point are in effect transparently 246 redirected to the remote system, which is the authoritative owner of the 247 data. The mounting system does not even necessarily need to be aware 248 of the specific data in the remote subtree. Optionally, caching 249 strategies can be employed in which the mounting system prefetches 250 data.

252 A second category of applications concerns decentralized networking 253 applications that require globally consistent configuration of 254 parameters. When each network element maintains its own datastore 255 with the same configurable settings, a single global change requires 256 modifying the same information in many network elements across a 257 network. In case of inconsistent configurations, network failures 258 can result that are difficult to troubleshoot.
In many cases, what 259 is more desirable is the ability to configure such settings in a 260 single place, then make them available to every network element. 261 Today, this requires in general the introduction of specialized 262 servers and configuration options outside the scope of NETCONF, such 263 as RADIUS [RFC2866] or DHCP [RFC2131]. In order to address this 264 within the scope of NETCONF and YANG, the same information would have 265 to be redundantly modeled and maintained, representing operational 266 data (mirroring some remote server) on some network elements and 267 configuration data on a designated master. Either way, additional 268 complexity ensues. 270 Instead of replicating the same global parameters across different 271 datastores, the solution presented in this document allows a single 272 copy to be maintained in a subtree of single datastore that is then 273 mounted by every network element that requires awareness of these 274 parameters. The global parameters can be hosted in a controller or a 275 designated network element. This considerably simplifies the 276 management of such parameters that need to be known across elements 277 in a network and require global consistency. 279 It should be noted that for these and many other applications merely 280 having a view of the remote information is sufficient. It allows to 281 define consolidated views of information without the need for 282 replicating data and models that have already been defined, to audit 283 information, and to validate consistency of configurations across a 284 network. Only retrieval operations are required; no operations that 285 involve configuring remote data are involved. 287 2. Definitions and Acronyms 289 Data node: An instance of management information in a YANG datastore. 291 DHCP: Dynamic Host Configuration Protocol. 293 Datastore: A conceptual store of instantiated management information, 294 with individual data items represented by data nodes which are 295 arranged in hierarchical manner. 297 Datastore-push: A mechanism that allows a client to subscribe to 298 updates from a datastore, which are then automatically pushed by the 299 server to the client. 301 Data subtree: An instantiated data node and the data nodes that are 302 hierarchically contained within it. 304 Mount client: The system at which the mount point resides, into which 305 the remote subtree is mounted. 307 Mount point: A data node that receives the root node of the remote 308 datastore being mounted. 310 Mount server: The server with which the mount client communicates and 311 which provides the mount client with access to the mounted 312 information. Can be used synonymously with mount target. 314 Mount target: A remote server whose datastore is being mounted. 316 NACM: NETCONF Access Control Model 318 NETCONF: Network Configuration Protocol 320 RADIUS: Remote Authentication Dial In User Service. 322 RPC: Remote Procedure Call 324 Remote datastore: A datastore residing at a remote node. 326 URI: Uniform Resource Identifier 328 YANG: A data definition language for NETCONF 330 3. Example scenarios 332 The following example scenarios outline some of the ways in which the 333 ability to mount YANG datastores can be applied. Other mount 334 topologies can be conceived in addition to the ones presented here. 336 3.1. Network controller view 338 Network controllers can use the mounting capability to present a 339 consolidated view of management information across the network. 
This 340 allows network controllers to expose network-wide abstractions, such 341 as topologies or paths, multi-device abstractions, such as VRRP 342 [RFC3768], and network-element-specific abstractions, such as 343 information about a network element's interfaces.

345 While an application on top of a controller could bypass the 346 controller to access network elements directly for their 347 element-specific abstractions, this would come at the expense of added 348 inconvenience for the client application. In addition, it would 349 compromise the ability to provide layered architectures in which 350 access to the network by controller applications is truly channeled 351 through the controller.

353 Without a mounting capability, a network controller would need to at 354 least conceptually replicate data from network elements to provide 355 such a view, incorporating network element information into its own 356 controller model that is separate from the network element's, 357 indicating that the information in the controller model is to be 358 populated from network elements. This can introduce issues such as 359 data inconsistency and staleness. Equally important, it would lead 360 to the need to define redundant data models: one model that is 361 implemented by the network element itself, and another model to be 362 implemented by the network controller. This leads to poor 363 maintainability, as analogous information has to be redundantly 364 defined and implemented across different data models. In general, 365 controllers cannot simply support the same modules as their network 366 elements for the same information because that information needs to 367 be put into a different context. This leads to "node"-information 368 that needs to be instantiated and indexed differently, because there 369 are multiple instances across different datastores.

371 For example, "system"-level information of a network element would 372 most naturally be placed into a top-level container at that network 373 element's datastore. At the same time, the same information in the 374 context of the overall network, such as that maintained by a controller, 375 might better be provided in a list. For example, the controller 376 might maintain a list with a list element for each network element, 377 underneath which the network element's system-level information is 378 contained. However, the containment structure of data nodes in a 379 module, once defined, cannot be changed. This means that in the 380 context of a network controller, a second module that repeats the 381 same system-level information would need to be defined, implemented, 382 and maintained. Any augmentations that add additional system-level 383 information to the original module will likewise need to be 384 redundantly defined, once for the "system" module and a second time for 385 the "controller" module.

387 By allowing a network controller to directly mount information from 388 network element datastores, the controller does not need to replicate 389 the same information from multiple datastores. Perhaps even more 390 importantly, the need to re-define any network element and system- 391 level abstractions just to be able to put them in the context of 392 network abstractions is avoided. In this solution, a network 393 controller's datastore mounts information from many network element 394 datastores. For example, the network controller datastore (the 395 "primary" datastore) could implement a list in which each list 396 element contains a mountpoint, along the lines of the sketch that follows.
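   A minimal sketch of what such a controller module could look like is
   shown below.  It uses the mountpoint extensions that are introduced
   in Section 5.1 (the same structure is shown as a tree diagram in
   Section 5.2).  The module name, the node-address leaf, and the
   mounted subtree path are illustrative assumptions, not definitions
   made by this document.

     module example-network {
       namespace "urn:example:network";
       prefix exnet;

       import ietf-mount {
         prefix mnt;
       }

       container network {
         container nodes {
           list node {
             key "node-ID";
             leaf node-ID {
               type string;
             }
             leaf node-address {
               type string;
               description
                 "Illustrative leaf that identifies and addresses the
                  remote network element owning the mounted data.";
             }
             container node-system-info {
               description
                 "Mountpoint: rendered with the remote element's
                  system-level information.";
               mnt:mountpoint "node-system-info" {
                 // Peer-Mount: the target references the sibling
                 // node-address leaf; the subtree path (here assumed
                 // to be the 'system' container of RFC 7317) is
                 // resolved on the remote server.
                 mnt:target "../node-address";
                 mnt:subtree "/sys:system";
               }
             }
           }
         }
       }
     }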
Each mountpoint mounts a subtree from 397 a different network element's datastore. The data from the mounted 398 subtrees is then accessible to clients of the primary datastore using 399 the usual data retrieval operations.

401 This scenario is depicted in Figure 1. In the figure, M1 is the 402 mountpoint for the datastore in Network Element 1 and M2 is the 403 mountpoint for the datastore in Network Element 2. MDN1 is the 404 mounted data node in Network Element 1, and MDN2 is the mounted data 405 node in Network Element 2.

407             +-------------+
408             |  Network    |
409             |  Controller |
410             |  Datastore  |
411             |             |
412             |    +--N10   |
413             |    +--N11   |
414             |    +--N12   |
415             |    +--M1**********************************
416             |    +--M2********                         *
417             |             |  *                         *
418             +-------------+  *                         *
419                              *  +---------------+      *  +---------------+
420                              *  |  +--N1        |      *  |  +--N5        |
421                              *  |  +--N2        |      *  |  +--N6        |
422                       ********>    +--MDN2      |*******>    +--MDN1      |
423                                 |  +--N3        |         |  +--N7        |
424                                 |  +--N4        |         |  +--N8        |
425                                 |               |         |               |
426                                 |  Network      |         |  Network      |
427                                 |  Element      |         |  Element      |
428                                 |  Datastore    |         |  Datastore    |
429                                 +---------------+         +---------------+

431                  Figure 1: Network controller mount topology

433 3.2.  Consistent network configuration

435 A second category of applications concerns decentralized networking 436 applications that require globally consistent configuration of 437 parameters that need to be known across elements in a network. 438 Today, the configuration of such parameters is generally performed on 439 a per-network-element basis, which is not only redundant but, more 440 importantly, error-prone. Inconsistent configurations lead to 441 erroneous network behavior that can be challenging to troubleshoot.

443 Using the ability to mount information from remote datastores opens 444 up a new possibility for managing such settings. Instead of 445 replicating the same global parameters across different datastores, a 446 single copy is maintained in a subtree of a single datastore. This 447 datastore can be hosted in a controller or a designated network element. 448 The subtree is subsequently mounted by every network element that 449 requires access to these parameters.

451 In many ways, this category of applications is an inverse of the 452 previous category: Whereas in the network controller case data from 453 many different datastores would be mounted into the same datastore 454 with multiple mountpoints, in this case many elements, each with 455 its own datastore, mount the same remote 456 datastore.

458 The scenario is depicted in Figure 2. In the figure, M1 is the 459 mountpoint for the Network Controller datastore in Network Element 1 460 and M2 is the mountpoint for the Network Controller datastore in 461 Network Element 2. MDN is the mounted data node in the Network 462 Controller datastore that contains the data nodes that represent the 463 shared configuration settings. (Note that there is no reason why the 464 Network Controller Datastore in this figure could not simply reside 465 on a network element itself; the division of responsibilities is a 466 logical one.)
468    +---------------+     +---------------+
469    |  Network      |     |  Network      |
470    |  Element      |     |  Element      |
471    |  Datastore    |     |  Datastore    |
472    |               |     |               |
473    |  +--N1        |     |  +--N5        |
474    |  |  +--N2     |     |  |  +--N6     |
476    |  |  +--N3     |     |  |  +--N7     |
477    |  |  +--N4     |     |  |  +--N8     |
478    |               |     |               |
479    |  +--M1        |     |  +--M2        |
480    +-----*---------+     +-----*---------+
481         *                     *           +---------------+
482         *                     *           |               |
483         *                     *           |  +--N10       |
484         *                     *           |  +--N11       |
485         *********************************>   +--MDN       |
486                                           |  +--N20       |
487                                           |  +--N21       |
488                                           |  ...          |
489                                           |  +--N22       |
490                                           |               |
491                                           |  Network      |
492                                           |  Controller   |
493                                           |  Datastore    |
494                                           +---------------+

496           Figure 2: Distributed config settings topology

498 4.  Operating on mounted data

500 This section provides a rough illustration of the operations flow 501 involving mounted datastores.

503 4.1.  General principles

505 The first thing that should be noted about these operation flows 506 is that a mount client essentially constitutes a 507 special management application that interacts with a subtree to 508 render the data of that subtree as an alternative tree hierarchy. In 509 the case of Alias-Mount, both the original and the alternative tree are 510 maintained by the same server, which in effect provides alternative 511 paths to the same data. In the case of Peer-Mount, the mount client 512 constitutes in effect another application, with the remote system 513 remaining the authoritative owner of the data. While it is 514 conceivable that the remote system (or an application that proxies 515 for the remote system) provides certain functionality to facilitate 516 the specific needs of the mount client to make it more efficient, the 517 fact that another system decides to expose a certain "view" of that 518 data is fundamentally not the remote system's concern.

520 When a client application makes a request to a server that involves 521 data that is mounted from a remote system, the server will 522 effectively act as a proxy to the remote system on the client 523 application's behalf. It will extract from the client application 524 request the portion that involves the mounted subtree from the remote 525 system. It will strip that portion of the local context, i.e. remove 526 any local data paths and insert the data path of the mounted remote 527 subtree, as appropriate. The server will then forward the transposed 528 request to the remote system that is the authoritative owner of the 529 mounted data, acting itself as a client to the remote server. Upon 530 receiving the reply, the server will transpose the results into the 531 local context as needed, for example map the data paths into the 532 local data tree structure, and combine those results with the results 533 of the remaining portion of the original request.

535 4.2.  Data retrieval

537 Data retrieval operations are the only category of operations that is 538 supported for peer-mounted information. In that case, a Netconf 539 "get" or "get-config" operation might be applied on a subtree 540 whose scope includes a mount point. When resolving the mount point, 541 the server issues its own "get" or "get-config" request 542 against the remote system's subtree that is attached to the mount 543 point. The returned information is then inserted into the data 544 structure that is in turn returned to the client that originally 545 invoked the request.
547 4.3.  Other operations

549 The fact that data retrieval operations are the only category of 550 operations that is supported for peer-mounted information does not 551 preclude other operations from being applied to datastore subtrees that 552 contain mountpoints and peer-mounted information. Peer-mounted 553 information is simply transparent to those operations. When an 554 operation is applied to a subtree which includes mountpoints, mounted 555 information is ignored for the purposes of the operation. For example, 556 for a Netconf "edit-config" operation that includes a subtree with a 557 mountpoint, a server will ignore the data under the mountpoint and 558 apply the operation only to the local configuration. Mounted data is 559 "read-only" data. The server does not even need to return an error 560 message that the operation could not be applied to mounted data; the 561 mountpoint is simply ignored.

563 In principle, it is conceivable that operations other than data 564 retrieval are applied to mounted data as well. For example, an 565 operation to edit configuration information might expect edits to be 566 applied to remote systems as part of the operation, where the edited 567 subtree involves mounted information. However, editing of 568 information and "writing through" to remote systems potentially 569 involves significant complexity, particularly if transactions and 570 locking across multiple configuration items are involved. Support 571 for such operations will require additional capabilities, 572 specification of which is beyond the scope of this document.

574 Likewise, YANG-Mount does not extend towards RPCs that are defined as 575 part of YANG modules whose contents are being mounted. Support for 576 RPCs that involve mounted portions of the datastore, while 577 conceivable, would require introduction of an additional capability, 578 whose definition is outside the scope of this specification.

580 By the same token, YANG-Mount does not extend towards notifications. 581 It is conceivable to offer such support in the future using a 582 separate capability, definition of which is once again outside the 583 scope of this specification.

585 4.4.  Other considerations

587 Since mounting of information typically involves communication with a 588 remote system, there is a possibility that the remote system will not 589 respond within a certain amount of time, that connectivity is lost, 590 or that other errors occur. Accordingly, the ability to mount 591 datastores also involves mountpoint management, which includes the 592 ability to configure timeouts, retries, and management of mountpoint 593 state (including dynamic addition and removal of mountpoints). 594 Mountpoint management will be discussed in Section 5.3.

596 It is expected that some implementations will introduce caching 597 schemes. Caching can increase performance and efficiency in certain 598 scenarios (for example, in the case of data that is frequently read 599 but that rarely changes), but increases implementation complexity. 600 Caching is not required for YANG-Mount to work; without caching, access 601 to mounted information is "on-demand", with the authoritative 602 data node always being accessed. Whether to perform caching is a 603 local implementation decision.

605 When caching is introduced, it can benefit from the ability to 606 subscribe to updates on remote data by remote servers. Requirements 607 for such a capability have been defined in [RFC7923].
Some 608 optimizations to facilitate caching support will be discussed in 609 Section 5.4.

611 5.  Data model structure

613 5.1.  YANG mountpoint extensions

615 At the center of the module is a set of YANG extensions that allow to 616 define a mountpoint.

618 o  The first extension, "mountpoint", is used to declare a 619 mountpoint. The extension takes the name of the mountpoint as an 620 argument.

622 o  The second extension, "subtree", serves as a substatement underneath 623 a mountpoint statement. It takes an argument that defines the 624 root node of the datastore subtree that is to be mounted, 625 specified as a string that contains a path expression. This 626 extension is used to define mountpoints for Alias-Mount, as well 627 as Peer-Mount.

629 o  The third extension, "target", also serves as a substatement 630 underneath a mountpoint statement. It is used for Peer-Mount and 631 takes an argument that identifies the target system. The argument 632 is a reference to a data node that contains the information that 633 is needed to identify and address a remote server, such as an IP 634 address, a host name, or a URI [RFC3986].

636 A mountpoint MUST be contained underneath a container. Future 637 revisions might allow for mountpoints to be contained underneath 638 other data nodes, such as lists, leaf-lists, and cases. However, to 639 keep things simple, at this point mounting is only allowed directly 640 underneath a container.

642 Only a single data node can be mounted at one time. While the mount 643 target could refer to any data node, as a best 644 practice the mount target SHOULD refer to a container. It is 645 possible to maintain, for example, a list of mount points, each of 646 which has as its mount target an element of a remote list. 647 However, to avoid unnecessary proliferation of the number of mount 648 points and associated management overhead, when data from lists or 649 leaf-lists is to be mounted, a container containing the list 650 (or leaf-list, respectively) SHOULD be mounted instead of individual list 651 elements.

653 It is possible for a mounted datastore to contain another mountpoint, 654 thus leading to several levels of mount indirection. However, 655 mountpoints MUST NOT introduce circular dependencies. In particular, 656 a mounted datastore MUST NOT contain a mountpoint which specifies the 657 mounting datastore as a target and a subtree which contains as root 658 node a data node that in turn contains the original mountpoint. 659 Whenever a mount operation is performed, this condition MUST be 661 validated by the mount client.

663 5.2.  YANG structure diagrams

665 YANG data model structure overviews have proven very useful to convey 666 the "Big Picture". It would be useful to indicate in YANG data model 667 structure overviews the fact that a given data node serves as a 668 mountpoint. For this purpose, we also propose a corresponding 669 extension to the structure representation convention. Specifically, 670 we propose to prefix the name of the mounting data node with an upper- 671 case 'M'.

673    rw network
674       +-- rw nodes
675          +-- rw node  [node-ID]
676             +-- rw node-ID
677             +-- M node-system-info

679 5.3.  Mountpoint management

681 The YANG module contains facilities to manage the mountpoints 682 themselves.

684 For this purpose, a list of the mountpoints is introduced. Each list 685 element represents a single mountpoint.
It includes an 686 identification of the mount target, i.e. the remote system hosting 687 the remote datastore, and a definition of the subtree of the remote 688 data node being mounted. It also includes monitoring information 689 about current status (indicating whether the mount has been 690 successful and is operational, or whether an error condition applies 691 such as the target being unreachable or referring to an invalid 692 subtree).

694 In addition to the list of mountpoints, a set of global mount policy 695 settings allows to set parameters such as mount retries and timeouts.

697 Each mountpoint list element also contains a set of the same 698 configuration knobs, allowing administrators to override global mount 699 policies and configure mount policies on a per-mountpoint basis if 700 needed.

702 There are two ways in which mounting occurs: automatically (dynamically 703 performed as part of system operation) or manually (administered by a 704 user or client application). A separate mountpoint-origin object is 705 used to distinguish between manually configured and automatically 706 populated mountpoints.

708 Whether mounting occurs automatically or needs to be manually 709 configured by a user or an application can depend on the mountpoint 710 being defined, i.e. the semantics of the model.

712 When mounting occurs automatically, mountpoint information is 713 populated by the datastore that implements the 714 mountpoint. The precise mechanisms for discovering mount targets and 715 bootstrapping mount points are provided by the mount client 716 infrastructure and are outside the scope of this specification. 717 Likewise, when a mountpoint should be deleted and when it should 718 merely have its mount-status indicate that the target is unreachable 719 is a system-specific implementation decision.

721 Manual mounting consists of two steps. In a first step, a mountpoint 722 is manually configured by a user or client application through 723 administrative action. Once a mountpoint has been configured, actual 724 mounting occurs through an RPC that is defined specifically for that 725 purpose. To unmount, a separate RPC is invoked; mountpoint 726 configuration information needs to be explicitly deleted. Manual 727 mounting can also be used to override automatic mounting, for example 728 to allow an administrator to set up or remove a mountpoint.

730 It should be noted that mountpoint management does not allow users to 731 manually "extend" the model, i.e. simply add a subtree underneath 732 some arbitrary data node into a datastore, without a 733 mountpoint defined in the model to support it. A mountpoint 734 definition is a formal part of the model with well-defined semantics. 735 Accordingly, mountpoint management does not allow users to 736 dynamically "extend" the data model itself. It allows users to 737 populate the datastore and mount structure within the confines of a 738 model that has been defined previously.

740 The structure of the mountpoint management data model is depicted in 741 the following figure, where brackets enclose list keys, "rw" means 742 configuration, "ro" operational state data, and "?" designates 743 optional nodes. Parentheses enclose choice and case nodes. The 744 figure does not depict all definitions; it is intended to illustrate 745 the overall structure.

747    module: ietf-mount
748       +--rw mount-server-mgmt {mount-server-mgmt}?
749          +--rw mountpoints
750          |  +--rw mountpoint* [mountpoint-id]
751          |     +--rw mountpoint-id        string
752          |     +--ro mountpoint-origin?   enumeration
753          |     +--rw subtree-ref          subtree-ref
754          |     +--rw mount-target
755          |     |  +--rw (target-address-type)
756          |     |     +--:(IP)
757          |     |     |  +--rw target-ip?          inet:ip-address
758          |     |     +--:(URI)
759          |     |     |  +--rw uri?                inet:uri
760          |     |     +--:(host-name)
761          |     |     |  +--rw hostname?           inet:host
762          |     |     +--:(node-ID)
763          |     |     |  +--rw node-info-ref?      subtree-ref
764          |     |     +--:(other)
765          |     |        +--rw opaque-target-ID?   string
766          |     +--ro mount-status?        mount-status
767          |     +--rw manual-mount?        empty
768          |     +--rw retry-timer?         uint16
769          |     +--rw number-of-retries?   uint8
770          +--rw global-mount-policies
771             +--rw manual-mount?        empty
772             +--rw retry-timer?         uint16
773             +--rw number-of-retries?   uint8

775 5.4.  Caching

777 Under certain circumstances, it can be useful to maintain a cache of 778 remote information. Instead of accessing the remote system, requests 779 are served from a copy that is locally maintained. This is 780 particularly advantageous in cases where data is slow-changing, i.e. 781 when there are many more "read" operations than changes to the 782 underlying data node, and in cases where a significant delay would be 783 incurred when accessing the remote system, which might be prohibitive 784 for certain applications. Examples of such applications are 785 applications that involve real-time control loops requiring response 786 times that are measured in milliseconds. However, as data nodes that 787 are mounted from an authoritative datastore represent the "golden 788 copy", it is important that any modifications are reflected as soon 789 as they are made.

791 It is a local implementation decision of mount clients whether to 792 cache information once it has been fetched. However, in order to 793 support more powerful caching schemes, it becomes necessary for the 794 mount server to "push" information proactively. For this purpose, it 795 is useful for the mount client to subscribe for updates to the 796 mounted information at the mount server. A corresponding mechanism 797 that can be leveraged for this purpose is specified in 798 [I-D.ietf-netconf-yang-push].

800 Note that caching large mountpoints can be expensive. Therefore, 801 limiting the amount of data that is unnecessarily passed when mounting near 802 the top of a YANG subtree is important. For these reasons, an 803 ability to specify a particular caching strategy in conjunction with 804 mountpoints can be desirable, including the ability to exclude 805 certain nodes and subtrees from caching. Corresponding capabilities may 806 be introduced in a future version of this draft.

808 5.5.  Other considerations

810 5.5.1.  Authorization

812 Access to mounted information is subject to authorization rules. To 813 the mounted system, a mounting client will in general appear like any 814 other client. Authorization privileges for remote mounting clients 815 need to be specified through NACM (NETCONF Access Control Model) 816 [RFC6536].

818 5.5.2.  Datastore qualification

820 It is conceivable to differentiate between different datastores on 821 the remote server, that is, to designate the name of the actual 822 datastore to mount, e.g. "running" or "startup". However, for the 823 purposes of this specification, we assume that the datastore to be mounted is 824 generally implied. Mounted information is treated as analogous to 825 operational data; in general, this means the running or "effective" 826 datastore is the target.
That said, the information about which targets to 827 mount does constitute configuration and can hence be part of a 828 startup or candidate datastore.

830 5.5.3.  Mount cascades

832 It is possible for the mounted subtree to in turn contain a 833 mountpoint. However, circular mount relationships MUST NOT be 834 introduced. For this reason, a mounted subtree MUST NOT contain a 835 mountpoint that refers back to the mounting system with a mount 836 target that directly or indirectly contains the originating 837 mountpoint. As part of a mount operation, the mount points of the 838 mounted system need to be checked accordingly.

840 5.5.4.  Implementation considerations

842 Implementation specifics are outside the scope of this specification. 843 That said, the following considerations apply:

845 Systems that wish to mount information from remote datastores need to 846 implement a mount client. The mount client communicates with a 847 remote system to access the remote datastore. To do so, there are 848 several options:

850 o  The mount client acts as a NETCONF client to a remote system. 851 Alternatively, another interface to the remote system can be used, 852 such as a REST API using JSON encodings, as specified in 853 [I-D.ietf-netconf-restconf]. Either way, to the remote system, 854 the mount client constitutes essentially a client application like 855 any other. The mount client in effect IS a special kind of client 856 application.

858 o  The mount client communicates with a remote mount server through a 859 separate protocol. The mount server is deployed on the same 860 system as the remote NETCONF datastore and interacts with it 861 through a set of local APIs.

863 o  The mount client communicates with a remote mount server that acts 864 as a NETCONF client proxy to a remote system, on the client's 865 behalf. The communication between mount client and remote mount 866 server might involve a separate protocol, which is translated into 867 NETCONF operations by the remote mount server.

869 It is the responsibility of the mount client to manage the 870 association with the target system, e.g. to validate that it is still 871 reachable by maintaining a permanent association, to perform 872 reachability checks in case of a connectionless transport, etc.

874 It is the responsibility of the mount client to manage the 875 mountpoints. This means that the mount client needs to populate the 876 mountpoint monitoring information (e.g. keep mount-status up to date 877 and determine, in the case of automatic mounting, when to add and 878 remove mountpoint configuration). In the case of automatic mounting, 879 the mount client also interacts with the mountpoint discovery and 880 bootstrap process.

882 The mount client needs to also participate in servicing datastore 883 operations involving mounted information. A requested operation 884 involving a mountpoint is relayed by the mounting system's 885 infrastructure to the mount client. For example, a request to 886 retrieve information from a datastore leads to an invocation of an 887 internal mount client API when a mount point is reached. The mount 888 client then relays a corresponding operation to the remote datastore. 889 It subsequently relays the result along with any responses back to 890 the invoking infrastructure, which then merges the result (e.g. a 891 retrieved subtree with the rest of the information that was 892 retrieved) as needed. Relaying the result may involve the need to 893 transpose error response codes in certain corner cases, e.g.
when 894 mounted information could not be reached due to loss of connectivity 895 with the remote server, or when a configuration request failed due to 896 validation error. 898 5.5.5. Modeling best practices 900 There is a certain amount of overhead associated with each mount 901 point. The mount point needs to be managed and state maintained. 902 Data subscriptions need to be maintained. Requests including mounted 903 subtrees need to be decomposed and responses from multiple systems 904 combined. 906 For those reasons, as a general best practice, models that make use 907 of mount points SHOULD be defined in a way that minimizes the number 908 of mountpoints required. Finely granular mounts, in which multiple 909 mountpoints are maintained with the same remote system, each 910 containing only very small data subtrees, SHOULD be avoided. For 911 example, lists SHOULD only contain mountpoints when individual list 912 elements are associated with different remote systems. To mount data 913 from lists in remote datastores, a container node that contains all 914 list elements SHOULD be mounted instead of mounting each list element 915 individually. Likewise, instead of having mount points refer to 916 nodes contained underneath choices, a mountpoint should refer to a 917 container of the choice. 919 6. Datastore mountpoint YANG module 921 922 file "ietf-mount@2016-09-19.yang" 923 module ietf-mount { 924 namespace "urn:ietf:params:xml:ns:yang:ietf-mount"; 925 prefix mnt; 927 import ietf-inet-types { 928 prefix inet; 929 } 931 organization 932 "IETF NETMOD (NETCONF Data Modeling Language) Working Group"; 933 contact 934 "WG Web: 935 WG List: 937 WG Chair: Kent Watsen 938 940 WG Chair: Lou Berger 941 943 Editor: Alexander Clemm 944 946 Editor: Jan Medved 947 949 Editor: Eric Voit 950 "; 951 description 952 "This module provides a set of YANG extensions and definitions 953 that can be used to mount information from remote datastores."; 955 revision 2016-09-19 { 956 description 957 "Initial revision."; 958 reference 959 "draft-clemm-netmod-mount-05.txt"; 960 } 962 extension mountpoint { 963 argument name; 964 description 965 "This YANG extension is used to mount data from another 966 subtree in place of the node under which this YANG extension 967 statement is used. 969 This extension takes one argument which specifies the name 970 of the mountpoint. 972 This extension can occur as a substatement underneath a 973 container statement, a list statement, or a case statement. 974 As a best practice, it SHOULD occur as statement only 975 underneath a container statement, but it MAY also occur 976 underneath a list or a case statement. 978 The extension can take two parameters, target and subtree, 979 each defined as their own YANG extensions. 981 For Alias-Mount, a mountpoint statement MUST contain a 982 subtree statement for the mountpoint definition to be valid. 983 For Peer-Mount, a mountpoint statement MUST contain both a 984 target and a subtree substatement for the mountpoint 985 definition to be valid. 987 The subtree SHOULD be specified in terms of a data node of 988 type 'mnt:subtree-ref'. The targeted data node MUST 989 represent a container. 991 The target system MAY be specified in terms of a data node 992 that uses the grouping 'mnt:mount-target'. However, it 993 can be specified also in terms of any other data node that 994 contains sufficient information to address the mount target, 995 such as an IP address, a host name, or a URI. 
997 It is possible for the mounted subtree to in turn contain a 998 mountpoint. However, circular mount relationships MUST NOT 999 be introduced. For this reason, a mounted subtree MUST NOT 1000 contain a mountpoint that refers back to the mounting system 1001 with a mount target that directly or indirectly contains the 1002 originating mountpoint."; 1003 } 1005 extension target { 1006 argument target-name; 1007 description 1008 "This YANG extension is used to perform a Peer-Mount. 1009 It is used to specify a remote target system from which to 1010 mount a datastore subtree. This YANG 1011 extension takes one argument which specifies the remote 1012 system. In general, this argument will contain the name of 1013 a data node that contains the remote system information. It 1014 is recommended that the reference data node uses the 1015 mount-target grouping that is defined further below in this 1016 module. 1018 This YANG extension can occur only as a substatement below 1019 a mountpoint statement. It MUST NOT occur as a substatement 1020 below any other YANG statement."; 1021 } 1023 extension subtree { 1024 argument subtree-path; 1025 description 1026 "This YANG extension is used to specify a subtree in a 1027 datastore that is to be mounted. This YANG extension takes 1028 one argument which specifies the path to the root of the 1029 subtree. The root of the subtree SHOULD represent an 1030 instance of a YANG container. However, it MAY represent 1031 also another data node. 1033 This YANG extension can occur only as a substatement below 1034 a mountpoint statement. It MUST NOT occur as a substatement 1035 below any other YANG statement."; 1036 } 1038 feature mount-server-mgmt { 1039 description 1040 "Provide additional capabilities to manage remote mount 1041 points"; 1042 } 1044 typedef mount-status { 1045 type enumeration { 1046 enum "ok" { 1047 description 1048 "Mounted"; 1049 } 1050 enum "no-target" { 1051 description 1052 "The argument of the mountpoint does not define a 1053 target system"; 1054 } 1055 enum "no-subtree" { 1056 description 1057 "The argument of the mountpoint does not define a 1058 root of a subtree"; 1059 } 1060 enum "target-unreachable" { 1061 description 1062 "The specified target system is currently 1063 unreachable"; 1064 } 1065 enum "mount-failure" { 1066 description 1067 "Any other mount failure"; 1068 } 1069 enum "unmounted" { 1070 description 1071 "The specified mountpoint has been unmounted as the 1072 result of a management operation"; 1073 } 1074 } 1075 description 1076 "This type is used to represent the status of a 1077 mountpoint."; 1078 } 1080 typedef subtree-ref { 1081 type string; 1082 description 1083 "This string specifies a path to a datanode. It corresponds 1084 to the path substatement of a leafref type statement. Its 1085 syntax needs to conform to the corresponding subset of the 1086 XPath abbreviated syntax. Contrary to a leafref type, 1087 subtree-ref allows to refer to a node in a remote datastore. 
1088 Also, a subtree-ref refers only to a single node, not a list 1089 of nodes."; 1090 } 1092 grouping mount-monitor { 1093 description 1094 "This grouping contains data nodes that indicate the 1095 current status of a mount point."; 1096 leaf mount-status { 1097 type mount-status; 1098 config false; 1099 description 1100 "Indicates whether a mountpoint has been successfully 1101 mounted or whether some kind of fault condition is 1102 present."; 1103 } 1104 } 1106 grouping mount-target { 1107 description 1108 "This grouping contains data nodes that can be used to 1109 identify a remote system from which to mount a datastore 1110 subtree."; 1111 container mount-target { 1112 description 1113 "A container is used to keep mount target information 1114 together."; 1115 choice target-address-type { 1116 mandatory true; 1117 description 1118 "Allows to identify mount target in different ways, 1119 i.e. using different types of addresses."; 1120 case IP { 1121 leaf target-ip { 1122 type inet:ip-address; 1123 description 1124 "IP address identifying the mount target."; 1126 } 1127 } 1128 case URI { 1129 leaf uri { 1130 type inet:uri; 1131 description 1132 "URI identifying the mount target"; 1133 } 1134 } 1135 case host-name { 1136 leaf hostname { 1137 type inet:host; 1138 description 1139 "Host name of mount target."; 1140 } 1141 } 1142 case node-ID { 1143 leaf node-info-ref { 1144 type subtree-ref; 1145 description 1146 "Node identified by named subtree."; 1147 } 1148 } 1149 case other { 1150 leaf opaque-target-ID { 1151 type string; 1152 description 1153 "Catch-all; could be used also for mounting 1154 of data nodes that are local."; 1155 } 1156 } 1157 } 1158 } 1159 } 1161 grouping mount-policies { 1162 description 1163 "This grouping contains data nodes that allow to configure 1164 policies associated with mountpoints."; 1165 leaf manual-mount { 1166 type empty; 1167 description 1168 "When present, a specified mountpoint is not 1169 automatically mounted when the mount data node is 1170 created, but needs to mounted via specific RPC 1171 invocation."; 1172 } 1173 leaf retry-timer { 1174 type uint16; 1175 units "seconds"; 1176 description 1177 "When specified, provides the period after which 1178 mounting will be automatically reattempted in case of a 1179 mount status of an unreachable target"; 1180 } 1181 leaf number-of-retries { 1182 type uint8; 1183 description 1184 "When specified, provides a limit for the number of 1185 times for which retries will be automatically 1186 attempted"; 1187 } 1188 } 1190 rpc mount { 1191 description 1192 "This RPC allows an application or administrative user to 1193 perform a mount operation. If successful, it will result in 1194 the creation of a new mountpoint."; 1195 input { 1196 leaf mountpoint-id { 1197 type string { 1198 length "1..32"; 1199 } 1200 description 1201 "Identifier for the mountpoint to be created. 1202 The mountpoint-id needs to be unique; 1203 if the mountpoint-id of an existing mountpoint is 1204 chosen, an error is returned."; 1205 } 1206 } 1207 output { 1208 leaf mount-status { 1209 type mount-status; 1210 description 1211 "Indicates if the mount operation was successful."; 1212 } 1213 } 1214 } 1215 rpc unmount { 1216 description 1217 "This RPC allows an application or administrative user to 1218 unmount information from a remote datastore. 
1219            the corresponding mountpoint will be removed from the
1220            datastore.";
1221         input {
1222           leaf mountpoint-id {
1223             type string {
1224               length "1..32";
1225             }
1226             description
1227               "Identifies the mountpoint to be unmounted.";
1228           }
1229         }
1230         output {
1231           leaf mount-status {
1232             type mount-status;
1233             description
1234               "Indicates if the unmount operation was successful.";
1235           }
1236         }
1237       }
1238       container mount-server-mgmt {
1239         if-feature mount-server-mgmt;
1240         description
1241           "Contains information associated with managing the
1242            mountpoints of a datastore.";
1243         container mountpoints {
1244           description
1245             "Keep the mountpoint information consolidated
1246              in one place.";
1247           list mountpoint {
1248             key "mountpoint-id";
1249             description
1250               "There can be multiple mountpoints.
1251                Each mountpoint is represented by its own
1252                list element.";
1253             leaf mountpoint-id {
1254               type string {
1255                 length "1..32";
1256               }
1257               description
1258                 "An identifier of the mountpoint.
1259                  RPC operations refer to the mountpoint
1260                  using this identifier.";
1261             }
1262             leaf mountpoint-origin {
1263               type enumeration {
1264                 enum "client" {
1265                   description
1266                     "Mountpoint has been supplied and is
1267                      manually administered by a client";
1268                 }
1269                 enum "auto" {
1270                   description
1271                     "Mountpoint is automatically
1272                      administered by the server";
1273                 }
1274               }
1275               config false;
1276               description
1277                 "This describes how the mountpoint came
1278                  into being.";
1279             }
1280             leaf subtree-ref {
1281               type subtree-ref;
1282               mandatory true;
1283               description
1284                 "Identifies the root of the subtree in the
1285                  target system that is to be mounted.";
1286             }
1287             uses mount-target;
1288             uses mount-monitor;
1289             uses mount-policies;
1290           }
1291         }
1292         container global-mount-policies {
1293           description
1294             "Provides mount policies applicable for all mountpoints,
1295              unless overridden for a specific mountpoint.";
1296           uses mount-policies;
1297         }
1298       }
1299     }

1301     <CODE ENDS>
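   To illustrate how the preceding definitions fit together, the
   following is a minimal sketch of what a single mountpoint entry
   under "mount-server-mgmt" might look like when read from a server
   that implements this module.  The sketch is not part of the module;
   the XML namespace, the mountpoint identifier, and all values shown
   are illustrative placeholders only.

     <!-- illustrative sketch only; namespace and values are
          placeholders, not defined by this document -->
     <mount-server-mgmt xmlns="urn:example:mount">
       <mountpoints>
         <mountpoint>
           <mountpoint-id>ne1-interfaces</mountpoint-id>
           <mountpoint-origin>client</mountpoint-origin>
           <subtree-ref>/if:interfaces</subtree-ref>
           <mount-target>
             <target-ip>192.0.2.1</target-ip>
           </mount-target>
           <mount-status>ok</mount-status>
           <retry-timer>30</retry-timer>
           <number-of-retries>3</number-of-retries>
         </mountpoint>
       </mountpoints>
       <global-mount-policies>
         <retry-timer>60</retry-timer>
       </global-mount-policies>
     </mount-server-mgmt>

   A mountpoint such as this can subsequently be unmounted by invoking
   the "unmount" RPC with the corresponding mountpoint-id.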
1303   7.  Security Considerations

1305     TBD

1307   8.  Acknowledgements

1309     We wish to acknowledge the helpful contributions, comments, and
1310     suggestions that were received from Tony Tkacik, Ambika Tripathy,
1311     Robert Varga, Prabhakara Yellai, Shashi Kumar Bansal, Lukas Sedlak,
1312     and Benoit Claise.

1314   9.  References

1316   9.1.  Normative References

1318     [RFC2131]  Droms, R., "Dynamic Host Configuration Protocol",
1319                RFC 2131, DOI 10.17487/RFC2131, March 1997,
1320                <http://www.rfc-editor.org/info/rfc2131>.

1322     [RFC2866]  Rigney, C., "RADIUS Accounting", RFC 2866,
1323                DOI 10.17487/RFC2866, June 2000,
1324                <http://www.rfc-editor.org/info/rfc2866>.

1326     [RFC3768]  Hinden, R., Ed., "Virtual Router Redundancy Protocol
1327                (VRRP)", RFC 3768, DOI 10.17487/RFC3768, April 2004,
1328                <http://www.rfc-editor.org/info/rfc3768>.

1330     [RFC3986]  Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform
1331                Resource Identifier (URI): Generic Syntax", STD 66,
1332                RFC 3986, DOI 10.17487/RFC3986, January 2005,
1333                <http://www.rfc-editor.org/info/rfc3986>.

1335     [RFC6020]  Bjorklund, M., Ed., "YANG - A Data Modeling Language for
1336                the Network Configuration Protocol (NETCONF)", RFC 6020,
1337                DOI 10.17487/RFC6020, October 2010,
1338                <http://www.rfc-editor.org/info/rfc6020>.

1340     [RFC6241]  Enns, R., Ed., Bjorklund, M., Ed., Schoenwaelder, J., Ed.,
1341                and A. Bierman, Ed., "Network Configuration Protocol
1342                (NETCONF)", RFC 6241, DOI 10.17487/RFC6241, June 2011,
1343                <http://www.rfc-editor.org/info/rfc6241>.

1345     [RFC6536]  Bierman, A. and M. Bjorklund, "Network Configuration
1346                Protocol (NETCONF) Access Control Model", RFC 6536,
1347                DOI 10.17487/RFC6536, March 2012,
1348                <http://www.rfc-editor.org/info/rfc6536>.

1350     [RFC7223]  Bjorklund, M., "A YANG Data Model for Interface
1351                Management", RFC 7223, DOI 10.17487/RFC7223, May 2014,
1352                <http://www.rfc-editor.org/info/rfc7223>.

1354     [RFC7923]  Voit, E., Clemm, A., and A. Gonzalez Prieto, "Requirements
1355                for Subscription to YANG Datastores", RFC 7923,
1356                DOI 10.17487/RFC7923, June 2016,
1357                <http://www.rfc-editor.org/info/rfc7923>.

1359   9.2.  Informative References

1361     [I-D.ietf-netconf-restconf]
1362                Bierman, A., Bjorklund, M., and K. Watsen, "RESTCONF
1363                Protocol", draft-ietf-netconf-restconf-16 (work in
1364                progress), August 2016.

1366     [I-D.ietf-netconf-yang-push]
1367                Clemm, A., Gonzalez Prieto, A., Voit, E., Tripathy, A.,
1368                and E. Nilsen-Nygaard, "Subscribing to YANG datastore push
1369                updates", draft-ietf-netconf-yang-push-03 (work in
1370                progress), June 2016.

1372     [I-D.voit-netmod-yang-mount-requirements]
1373                Voit, E., Clemm, A., and S. Mertens, "Requirements for
1374                mounting of local and remote YANG subtrees", draft-voit-
1375                netmod-yang-mount-requirements-00 (work in progress),
1376                March 2016.

1378   Appendix A.  Example

1380     In the following example, we assume the use case of a network
1381     controller that wants to provide a controller network view to its
1382     client applications.  This view needs to include network abstractions
1383     that are maintained by the controller itself, as well as certain
1384     information about network devices where the network abstractions tie
1385     in with element-specific information.  For this purpose, the network
1386     controller leverages the mount capability specified in this document
1387     and presents a fictitious Controller Network YANG Module that is
1388     depicted in the outlined structure below.  The example illustrates
1389     how mounted information is leveraged by the mounting datastore to
1390     provide an additional level of information that ties together network
1391     and device abstractions, which could not otherwise be provided
1392     without introducing a (redundant) model to replicate those device
1393     abstractions.

1395     rw controller-network
1396     +-- rw topologies
1397     |   +-- rw topology [topo-id]
1398     |       +-- rw topo-id             node-id
1399     |       +-- rw nodes
1400     |       |   +-- rw node [node-id]
1401     |       |       +-- rw node-id             node-id
1402     |       |       +-- rw supporting-ne       network-element-ref
1403     |       |       +-- rw termination-points
1404     |       |           +-- rw term-point [tp-id]
1405     |       |               +-- tp-id           tp-id
1406     |       |               +-- ifref           mountedIfRef
1407     |       +-- rw links
1408     |           +-- rw link [link-id]
1409     |               +-- rw link-id             link-id
1410     |               +-- rw source              tp-ref
1411     |               +-- rw dest                tp-ref
1412     +-- rw network-elements
1413         +-- rw network-element [element-id]
1414             +-- rw element-id          element-id
1415             +-- rw element-address
1416             |   +-- ...
1417             +-- M  interfaces

1419     The controller network model consists of the following key
1420     components:

1422     o  A container with a list of topologies.  A topology is a graph
1423        representation of a network at a particular layer, for example, an
1424        IS-IS topology, an overlay topology, or an OpenFlow topology.
1425        Specific topology types can be defined in their own separate YANG
1426        modules that augment the controller network model.  Those
1427        augmentations are outside the scope of this example.

1429     o  An inventory of network elements, along with certain information
1430        that is mounted from each element.  The information that is
1431        mounted in this case concerns interface configuration information.
1432        For this purpose, each list element that represents a network
1433        element contains a corresponding mountpoint.
        The mountpoint uses
1434        as its target the network element address information provided in
1435        the same list element.

1437     o  Each topology in turn contains a container with a list of nodes.
1438        A node is a network abstraction of a network device in the
1439        topology.  A node is hosted on a network element, as indicated by
1440        a network-element leafref.  This way, the "logical" and "physical"
1441        aspects of a node in the network are cleanly separated.

1443     o  A node also contains a list of termination points that terminate
1444        links.  A termination point is implemented on an interface.
1445        Therefore, it contains a leafref that references the corresponding
1446        interface configuration, which is part of the mounted information
1447        of a network element.  Again, the distinction between termination
1448        points and interfaces provides a clean separation between logical
1449        concepts at the network topology level and device-specific
1450        concepts that are instantiated at the level of a network element.
1451        Because the interface information is mounted from a different
1452        datastore and therefore occurs at a different level of the
1453        containment hierarchy than it would if it were not mounted, it is
1454        not possible to use the interface-ref type that is defined in the
1455        YANG data model for interface management [RFC7223] to allow the
1456        termination point to refer to its supporting interface.  For this
1457        reason, a new type definition "mountedIfRef" is introduced that
1458        makes it possible to refer to interface information that is
1459        mounted and hence has a different path.

1461     o  Finally, a topology also contains a container with a list of
1462        links.  A link is a network abstraction that connects nodes via
1463        node termination points.  In the example, directional point-to-
1464        point links are depicted, in which one node termination point
1465        serves as the source and another as the destination.

1467     The following is a YANG snippet of the module definition that makes
1468     use of the mountpoint definition.

1470     <CODE BEGINS>
1471     module controller-network {
1472         namespace "urn:cisco:params:xml:ns:yang:controller-network";
1473             // example only, replace with IANA namespace when assigned
1474         prefix cn;
1475         import mount {
1476             prefix mnt;
1477         }
1478         import interfaces {
1479             prefix if;
1480         }
1481         ...
1482         typedef mountedIfRef {
1483             type leafref {
1484                 path "/cn:controller-network/cn:network-elements/"
1485                     +"cn:network-element/cn:interfaces/if:interface/if:name";
1486                     // cn:interfaces corresponds to the mountpoint
1487             }
1488         }
1489         ...
1490         list termination-point {
1491             key "tp-id";
1492             ...
1493             leaf ifref {
1494                 type mountedIfRef;
1495             }
1496         ...
1497         list network-element {
1498             key "element-id";
1499             leaf element-id {
1500                 type element-ID;
1501             }
1502             container element-address {
1503                 ...  // choice definition that allows specification of
1504                      // host name,
1505                      // IP addresses, URIs, etc.
1506             }
1507             mnt:mountpoint "interfaces" {
1508                 mnt:target "./element-address";
1509                 mnt:subtree "/if:interfaces";
1510             }
1511             ...
1512         }
1513         ...
1514     <CODE ENDS>
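   Because "mountedIfRef" is a leafref whose path targets the mounted
   "if:name" leaf, an instantiated termination point simply carries the
   name of an interface that is mounted underneath a network element.
   The following is a minimal sketch of such an instance, using the
   list name from the YANG snippet above and purely illustrative
   values:

     <termination-point>
       <tp-id>tp-1</tp-id>
       <ifref>fastethernet-1/1</ifref>
     </termination-point>

   In the instantiated data shown further below, this value would match
   the interface "fastethernet-1/1" mounted under network element NE1.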
1516     Finally, the following contains an XML snippet of instantiated YANG
1517     information.  We assume three datastores: NE1 and NE2 each have a
1518     datastore (the mount targets) that contains interface configuration
1519     data, which is mounted into NC's datastore (the mount client).

1521     Interface information from NE1 datastore:

1523     <interfaces>
1524       <interface>
1525         <name>fastethernet-1/0</name>
1526         <type>ethernetCsmacd</type>
1527         <location>1/0</location>
1528       </interface>
1529       <interface>
1530         <name>fastethernet-1/1</name>
1531         <type>ethernetCsmacd</type>
1532         <location>1/1</location>
1533       </interface>
1534     </interfaces>

1536     Interface information from NE2 datastore:
1537     <interfaces>
1538       <interface>
1539         <name>fastethernet-1/0</name>
1540         <type>ethernetCsmacd</type>
1541         <location>1/0</location>
1542       </interface>
1543       <interface>
1544         <name>fastethernet-1/2</name>
1545         <type>ethernetCsmacd</type>
1546         <location>1/2</location>
1547       </interface>
1548     </interfaces>

1550     NC datastore with mounted interface information from NE1 and NE2:

1552     <controller-network>
1553       ...
1554       <network-elements>
1555         <network-element>
1556           <element-id>NE1</element-id>
1557           <element-address> .... </element-address>
1558           <interfaces>
1559             <interface>
1560               <name>fastethernet-1/0</name>
1561               <type>ethernetCsmacd</type>
1562               <location>1/0</location>
1563             </interface>
1564             <interface>
1565               <name>fastethernet-1/1</name>
1566               <type>ethernetCsmacd</type>
1567               <location>1/1</location>
1568             </interface>
1569           </interfaces>
1570         </network-element>
1571         <network-element>
1572           <element-id>NE2</element-id>
1573           <element-address> .... </element-address>
1574           <interfaces>
1575             <interface>
1576               <name>fastethernet-1/0</name>
1577               <type>ethernetCsmacd</type>
1578               <location>1/0</location>
1579             </interface>
1580             <interface>
1581               <name>fastethernet-1/2</name>
1582               <type>ethernetCsmacd</type>
1583               <location>1/2</location>
1584             </interface>
1585           </interfaces>
1586         </network-element>
1587       </network-elements>
1588       ...
1589     </controller-network>

1591   Authors' Addresses

1593     Alexander Clemm
1594     Cisco Systems

1596     EMail: ludwig@clemm.org

1597     Jan Medved
1598     Cisco Systems

1600     EMail: jmedved@cisco.com

1602     Eric Voit
1603     Cisco Systems

1605     EMail: evoit@cisco.com