idnits 2.17.1 draft-clemm-netmod-mount-01.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- ** The document seems to lack an IANA Considerations section. (See Section 2.2 of https://www.ietf.org/id-info/checklist for how to handle the case when there are no actions for IANA.) ** There are 36 instances of too long lines in the document, the longest one being 3 characters in excess of 72. ** The document seems to lack a both a reference to RFC 2119 and the recommended RFC 2119 boilerplate, even if it appears to use RFC 2119 keywords. RFC 2119 keyword, line 484: '... A mountpoint MUST be contained unde...' RFC 2119 keyword, line 492: '...the mount target SHOULD refer to a con...' RFC 2119 keyword, line 498: '... SHOULD be mounted....' RFC 2119 keyword, line 502: '... mountpoints MUST NOT introduce circ...' RFC 2119 keyword, line 503: '...ounted datastore MUST NOT contain a mo...' (17 more instances...) Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year == Line 564 has weird spacing: '...oint-id strin...' == Line 567 has weird spacing: '...rget-ip yang:...' == Line 569 has weird spacing: '... rw uri yang:...' == Line 571 has weird spacing: '...ostname yang:...' == Line 573 has weird spacing: '...nfo-ref mnt:s...' == (4 more instances...) 
== The document seems to contain a disclaimer for pre-RFC5378 work, but was
   first submitted on or after 10 November 2008.  The disclaimer is usually
   necessary only for documents that revise or obsolete older RFCs, and that
   take significant amounts of text from those RFCs.  If you can contact all
   authors of the source material and they are willing to grant the BCP78
   rights to the IETF Trust, you can and should remove the disclaimer.
   Otherwise, the disclaimer is needed and you can ignore this comment.
   (See the Legal Provisions document at
   https://trustee.ietf.org/license-info for more information.)

-- The document date (September 22, 2013) is 3863 days in the past.  Is
   this intentional?

Checking references for intended status: Experimental
----------------------------------------------------------------------------

** Obsolete normative reference: RFC 6536 (Obsoleted by RFC 8341)

== Outdated reference: A later version (-04) exists of
   draft-bierman-netconf-restconf-01

Summary: 4 errors (**), 0 flaws (~~), 9 warnings (==), 1 comment (--).

Run idnits with the --verbose option for more detailed information about
the items above.
--------------------------------------------------------------------------------

2  Network Working Group                                          A. Clemm
3  Internet-Draft                                                J. Medved
4  Intended status: Experimental                                    E. Voit
5  Expires: March 26, 2014                                      Cisco Systems
6                                                          September 22, 2013

8        Mounting YANG-Defined Information from Remote Datastores
9                     draft-clemm-netmod-mount-01.txt

11  Abstract

13     This document introduces a new capability that allows YANG datastores
14     to reference and incorporate information from remote datastores.
15     This is accomplished using a new YANG data model that supports the
16     definition and management of datastore mount points that reference
17     data nodes in remote datastores.  The data model includes a set of
18     YANG extensions for the purposes of declaring such mount points.
20 Status of This Memo 22 This Internet-Draft is submitted in full conformance with the 23 provisions of BCP 78 and BCP 79. 25 Internet-Drafts are working documents of the Internet Engineering 26 Task Force (IETF). Note that other groups may also distribute 27 working documents as Internet-Drafts. The list of current Internet- 28 Drafts is at http://datatracker.ietf.org/drafts/current/. 30 Internet-Drafts are draft documents valid for a maximum of six months 31 and may be updated, replaced, or obsoleted by other documents at any 32 time. It is inappropriate to use Internet-Drafts as reference 33 material or to cite them other than as "work in progress." 35 This Internet-Draft will expire on March 26, 2014. 37 Copyright Notice 39 Copyright (c) 2013 IETF Trust and the persons identified as the 40 document authors. All rights reserved. 42 This document is subject to BCP 78 and the IETF Trust's Legal 43 Provisions Relating to IETF Documents 44 (http://trustee.ietf.org/license-info) in effect on the date of 45 publication of this document. Please review these documents 46 carefully, as they describe your rights and restrictions with respect 47 to this document. Code Components extracted from this document must 48 include Simplified BSD License text as described in Section 4.e of 49 the Trust Legal Provisions and are provided without warranty as 50 described in the Simplified BSD License. 52 This document may contain material from IETF Documents or IETF 53 Contributions published or made publicly available before November 54 10, 2008. The person(s) controlling the copyright in some of this 55 material may not have granted the IETF Trust the right to allow 56 modifications of such material outside the IETF Standards Process. 
57 Without obtaining an adequate license from the person(s) controlling 58 the copyright in such materials, this document may not be modified 59 outside the IETF Standards Process, and derivative works of it may 60 not be created outside the IETF Standards Process, except to format 61 it for publication as an RFC or to translate it into languages other 62 than English. 64 Table of Contents 66 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2 67 2. Definitions and Acronyms . . . . . . . . . . . . . . . . . . 5 68 3. Example scenarios . . . . . . . . . . . . . . . . . . . . . . 6 69 3.1. Network controller view . . . . . . . . . . . . . . . . . 6 70 3.2. Distributed network configuration . . . . . . . . . . . . 8 71 4. Operating on mounted data . . . . . . . . . . . . . . . . . . 9 72 5. Data model structure . . . . . . . . . . . . . . . . . . . . 10 73 5.1. YANG mountpoint extensions . . . . . . . . . . . . . . . 10 74 5.2. Mountpoint management . . . . . . . . . . . . . . . . . . 11 75 5.3. YANG structure diagrams . . . . . . . . . . . . . . . . . 13 76 5.4. Other considerations . . . . . . . . . . . . . . . . . . 13 77 5.4.1. Authorization . . . . . . . . . . . . . . . . . . . . 13 78 5.4.2. Datastore qualification . . . . . . . . . . . . . . . 14 79 5.4.3. Local mounting . . . . . . . . . . . . . . . . . . . 14 80 5.4.4. Mount cascades . . . . . . . . . . . . . . . . . . . 14 81 5.4.5. Implementation considerations . . . . . . . . . . . . 15 82 6. Datastore mountpoint YANG module . . . . . . . . . . . . . . 16 83 7. Security Considerations . . . . . . . . . . . . . . . . . . . 23 84 8. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 23 85 9. References . . . . . . . . . . . . . . . . . . . . . . . . . 23 86 9.1. Normative References . . . . . . . . . . . . . . . . . . 23 87 9.2. Informative References . . . . . . . . . . . . . . . . . 23 88 Appendix A. Example . . . . . . . . . . . . . . . . . . . . . . 24 90 1. 
Introduction

92     This document introduces a new capability that allows YANG datastores
93     [RFC6020] to incorporate and reference information from remote
94     datastores.  This is provided by introducing a mountpoint concept.
95     This concept allows a YANG data node to be declared a "mount point",
96     under which a remote datastore subtree can be mounted.  To the user
97     of the primary datastore, the remote information appears as an
98     integral part of the datastore.  It allows remote data nodes and
99     datastore subtrees to be inserted into the local data hierarchy,
100    arranged below local data nodes.  The concept is reminiscent of a
101    Network File System, which allows remote folders to be mounted so
102    that they appear as if they were contained in the local file
103    system of the user's machine.

105    The ability to mount information from remote datastores is new and
106    not covered by existing YANG mechanisms.  Until now, management
107    information provided in a datastore has been intrinsically tied to
108    the same server.  In contrast, the capability introduced here allows
109    the server to represent information from remote systems as if it were
110    its own and contained in its own local data hierarchy.

112    YANG does provide means by which modules that have been separately
113    defined can reference and augment one another.  YANG also provides
114    means to specify data nodes that reference other data nodes.
115    However, all the data is assumed to be instantiated as part of the
116    same datastore, for example a datastore provided through a NETCONF
117    server [RFC6241].  Existing YANG mechanisms do not account for the
118    possibility that information that needs to be referenced resides not
119    merely in a different subtree of the same datastore, or in a separate
120    module that is also instantiated in the same datastore, but in a
121    genuinely different datastore that is provided by a different
122    server.
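The mountpoint concept described above can be illustrated with a small sketch. The following Python fragment is purely an illustration of the idea, not part of the data model defined in this document; the `Mountpoint` class, the `get` function, and the sample trees are invented for the example.

```python
# Conceptual sketch only: a lookup that crosses a mount point is
# transparently redirected to the remote datastore.  All names here are
# hypothetical and not part of the draft's data model.

class Mountpoint:
    """Marks a local data node whose content lives in a remote datastore."""
    def __init__(self, remote_datastore, subtree_path):
        self.remote = remote_datastore      # stand-in for a remote server
        self.subtree_path = subtree_path    # root of the mounted subtree

def get(tree, path):
    """Resolve a path, transparently following any mountpoint."""
    node = tree
    for i, step in enumerate(path):
        node = node[step]
        if isinstance(node, Mountpoint):
            # Redirect the remainder of the path to the remote subtree.
            return get(node.remote, node.subtree_path + path[i + 1:])
    return node

# Remote (network element) datastore with an interfaces subtree.
element = {"interfaces": {"eth0": {"mtu": 1500}}}

# Controller datastore mounts the element's interfaces container.
controller = {"nodes": {"element-1": Mountpoint(element, ["interfaces"])}}

# To a client of the controller, the remote data appears local:
mtu = get(controller, ["nodes", "element-1", "eth0", "mtu"])
```

A lookup that reaches `element-1` crosses the mountpoint, and the remainder of the path is resolved against the mounted subtree of the remote datastore, so `mtu` is retrieved from the network element even though the client only addressed the controller.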
124 The ability to mount data from remote datastores is useful to address 125 various problems that several categories of applications are faced 126 with: 128 One category of applications that can leverage this capability 129 concerns network controller applications that need to present a 130 consolidated view of management information in datastores across a 131 network. Controller applications are faced with the problem that in 132 order to expose information, that information needs to be part of 133 their own datastore. Today, this requires support of a corresponding 134 YANG data module. In order to expose information that concerns other 135 network elements, that information has to be replicated into the 136 controller's own datastore in the form of data nodes that may mirror 137 but are clearly distinct from corresponding data nodes in the network 138 element's datastore. In addition, in many cases, a controller needs 139 to impose its own hierarchy on the data that is different from the 140 one that was defined as part of the original module. An example for 141 this concerns interface configuration data, which would be contained 142 in a top-level container in a network element datastore, but may need 143 to be contained in a list in a controller datastore in order to be 144 able to distinguish instances from different network elements under 145 the controller's scope. This in turn would require introduction of 146 redundant YANG modules that effectively replicate the same 147 information save for differences in hierarchy. 149 By directly mounting information from network element datastores, the 150 controller does not need to replicate the same information from 151 multiple datastores, nor does it need to re-define any network 152 element and system-level abstractions to be able to put them in the 153 context of network abstractions. Instead, the subtree of the remote 154 system is attached to the local mount point. 
Operations that need to
155    access data below the mount point are in effect transparently
156    redirected to the remote system, which is the authoritative owner of
157    the data.  The mounting system does not even necessarily need to be
158    aware of the specific data in the remote subtree.

160    A second category of applications concerns decentralized networking
161    applications that require globally consistent configuration of
162    parameters.  When each network element maintains its own datastore
163    with the same configurable settings, a single global change requires
164    modifying the same information in many network elements across a
165    network.  Inconsistent configurations can result in network failures
166    that are difficult to troubleshoot.  In many cases, what
167    is more desirable is the ability to configure such settings in a
168    single place, then make them available to every network element.
169    Today, this generally requires the introduction of specialized
170    servers and configuration options outside the scope of NETCONF, such
171    as RADIUS [RFC2866] or DHCP [RFC2131].  In order to address this
172    within the scope of NETCONF and YANG, the same information would have
173    to be redundantly modeled and maintained, representing operational
174    data (mirroring some remote server) on some network elements and
175    configuration data on a designated master.  Either way, additional
176    complexity ensues.

178    Instead of replicating the same global parameters across different
179    datastores, the solution presented in this document allows a single
180    copy to be maintained in a subtree of a single datastore that is then
181    mounted by every network element that requires access to these
182    parameters.  The global parameters can be hosted in a controller or a
183    designated network element.  This considerably simplifies the
184    management of such parameters that need to be known across elements
185    in a network and require global consistency.
187    The capability of mounting information from remote
188    datastores into another datastore is accomplished by a set of YANG
189    extensions that allow such mount points to be defined.  For this
190    purpose, a new YANG module is introduced.  The module defines the YANG
191    extensions, as well as a data model that can be used to manage the
192    mountpoints and the mounting process itself.  Only the mounting module
193    and server need to be aware of the concepts introduced here.
194    Mounting is transparent to the models being mounted; any YANG model
195    can be mounted.

197 2.  Definitions and Acronyms

199    Data node: An instance of management information in a YANG datastore.

201    DHCP: Dynamic Host Configuration Protocol.

203    Datastore: A conceptual store of instantiated management information,
204    with individual data items represented by data nodes which are
205    arranged in a hierarchical manner.

207    Data subtree: An instantiated data node and the data nodes that are
208    hierarchically contained within it.

210    Mount client: The system at which the mount point resides, into which
211    the remote subtree is mounted.

213    Mount point: A data node that receives the root node of the remote
214    datastore being mounted.

216    Mount server: The server with which the mount client communicates and
217    which provides the mount client with access to the mounted
218    information.  Can be used synonymously with mount target.

220    Mount target: A remote server whose datastore is being mounted.

222    NACM: NETCONF Access Control Model

224    NETCONF: Network Configuration Protocol

226    RADIUS: Remote Authentication Dial In User Service.

228    RPC: Remote Procedure Call

230    Remote datastore: A datastore residing at a remote node.

232    URI: Uniform Resource Identifier

234    YANG: A data definition language for NETCONF

236 3.  Example scenarios

238    The following example scenarios outline some of the ways in which the
239    ability to mount YANG datastores can be applied.
Other mount 240 topologies can be conceived in addition to the ones presented here. 242 3.1. Network controller view 244 Network controllers can use the mounting capability to present a 245 consolidated view of management information across the network. This 246 allows network controllers to not only expose network abstractions, 247 such as topologies or paths, but also network element abstractions, 248 such as information about a network element's interfaces, from one 249 consolidated place. 251 While an application on top of a controller could in theory also 252 bypass the controller to access network elements directly for 253 network-element abstractions, this would come at the expense of added 254 inconvenience for the client application. In addition, it would 255 compromise the ability to provide layered architectures in which 256 access to the network by controller applications is truly channeled 257 through the controller. 259 Without a mounting capability, a network controller would need to at 260 least conceptually replicate data from network elements to provide 261 such a view, incorporating network element information into its own 262 controller model that is separate from the network element's, 263 indicating that the information in the controller model is to be 264 populated from network elements. This can introduce issues such as 265 data consistency and staleness. Even more importantly, it would in 266 general lead to the redundant definition of data models: one model 267 that is implemented by the network element itself, and another model 268 to be implemented by the network controller. This leads to poor 269 maintainability, as analogous information has to be redundantly 270 defined and implemented across different data models. In general, 271 controllers cannot simply support the same modules as their network 272 elements for the same information because that information needs to 273 be put into a different context. 
This leads to "node" information
274    that needs to be instantiated and indexed differently, because there
275    are multiple instances across different datastores.

277    For example, "system"-level information of a network element would
278    most naturally be placed into a top-level container at that network
279    element's datastore.  At the same time, the same information in the
280    context of the overall network, such as that maintained by a
281    controller, might better be provided in a list.  For example, the
282    controller might maintain a list with a list element for each network
283    element, underneath which the network element's system-level
284    information is contained.  However, the containment structure of data
285    nodes in a module, once defined, cannot be changed.  This means that
286    in the context of a network controller, a second module that repeats
287    the same system-level information would need to be defined,
288    implemented, and maintained.  Any augmentations that add additional
289    system-level information to the original module will likewise need to
290    be redundantly defined, once for the "system" module and a second
291    time for the "controller" module.

293    By allowing a network controller to directly mount information from
294    network element datastores, the controller does not need to replicate
295    the same information from multiple datastores.  Perhaps even more
296    importantly, the need to re-define any network element and system-
297    level abstractions to be able to put them in the context of network
298    abstractions is avoided.  In this solution, a network controller's
299    datastore mounts information from many network element datastores.
300    For example, the network controller datastore could implement a list
301    in which each list element contains a mountpoint.  Each mountpoint
302    mounts a subtree from a different network element's datastore.

304    This scenario is depicted in Figure 1.
In the figure, M1 is the
305    mountpoint for the datastore in Network Element 1 and M2 is the
306    mountpoint for the datastore in Network Element 2.  MDN1 is the
307    mounted data node in Network Element 1, and MDN2 is the mounted data
308    node in Network Element 2.

310    +-------------+
311    |  Network    |
312    |  Controller |
313    |  Datastore  |
314    |             |
315    |  +--N10     |
316    |  +--N11     |
317    |  +--N12     |
318    |  +--M1*********************************************
319    |  +--M2*********                                   *
320    |             | *                                   *
321    +-------------+ *                                   *
322                    *   +---------------+               *   +---------------+
323                    *   |  +--N1        |               *   |  +--N5        |
324                    *   |  +--N2        |               *   |  +--N6        |
325                    *****> +--MDN2      |               *****> +--MDN1      |
326                        |  +--N3        |                   |  +--N7        |
327                        |  +--N4        |                   |  +--N8        |
328                        |               |                   |               |
329                        |  Network      |                   |  Network      |
330                        |  Element      |                   |  Element      |
331                        |  Datastore    |                   |  Datastore    |
332                        +---------------+                   +---------------+

334              Figure 1: Network controller mount topology

336 3.2.  Distributed network configuration

338    A second category of applications concerns decentralized networking
339    applications that require globally consistent configuration of
340    parameters that need to be known across elements in a network.
341    Today, the configuration of such parameters is generally performed on
342    a per-network-element basis, which is not only redundant but, more
343    importantly, error-prone.  Inconsistent configurations lead to
344    erroneous network behavior that can be challenging to troubleshoot.

346    Using the ability to mount information from remote datastores opens
347    up a new possibility for managing such settings.  Instead of
348    replicating the same global parameters across different datastores, a
349    single copy is maintained in a subtree of a single datastore.  This
350    datastore can be hosted in a controller or a designated network
351    element.  The subtree is subsequently mounted by every network
352    element that requires access to these parameters.
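The sharing pattern in this scenario can be sketched as follows. This Python fragment is illustrative only and not part of the specification; the dict-based datastores and the `resolve` method are invented stand-ins for the actual mount machinery.

```python
# Illustrative sketch (not the draft's API): two network elements mount
# the same shared-settings subtree from a controller datastore, so a
# single update is visible to both.

controller = {"shared": {"dns-server": "192.0.2.1", "domain": "example.com"}}

class Mountpoint:
    def __init__(self, remote, subtree):
        self.remote, self.subtree = remote, subtree
    def resolve(self):
        # In a real system this would be a request to the controller;
        # here it is a direct reference for illustration.
        node = self.remote
        for step in self.subtree:
            node = node[step]
        return node

element1 = {"local-config": {}, "M1": Mountpoint(controller, ["shared"])}
element2 = {"local-config": {}, "M2": Mountpoint(controller, ["shared"])}

# A single change in the controller datastore...
controller["shared"]["dns-server"] = "192.0.2.53"

# ...is seen by every element that mounted the subtree.
assert element1["M1"].resolve()["dns-server"] == "192.0.2.53"
assert element2["M2"].resolve()["dns-server"] == "192.0.2.53"
```

Because every element resolves the same subtree in the controller datastore, there is a single authoritative copy of the shared settings and no per-element replication to keep consistent.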
354    In many ways, this category of applications is an inverse of the
355    previous category: whereas in the network controller case data from
356    many different datastores is mounted into the same datastore via
357    multiple mountpoints, in this case many elements, each with
358    their own datastore, mount the same subtree of the same remote
359    datastore.

361    The scenario is depicted in Figure 2.  In the figure, M1 is the
362    mountpoint for the Network Controller datastore in Network Element 1
363    and M2 is the mountpoint for the Network Controller datastore in
364    Network Element 2.  MDN is the mounted data node in the Network
365    Controller datastore that contains the data nodes that represent the
366    shared configuration settings.

368    +---------------+          +---------------+
369    |  Network      |          |  Network      |
370    |  Element      |          |  Element      |
371    |  Datastore    |          |  Datastore    |
372    |               |          |               |
373    |  +--N1        |          |  +--N5        |
374    |  |  +--N2     |          |  |  +--N6     |
376    |  |  +--N3     |          |  |  +--N7     |
377    |  |  +--N4     |          |  |  +--N8     |
378    |  |            |          |  |            |
379    |  +--M1        |          |  +--M2        |
380    +-----*---------+          +-----*---------+
381          *                          *          +---------------+
382          *                          *          |               |
383          *                          *          |  +--N10       |
384          *                          *          |  +--N11       |
385          ****************************************> +--MDN      |
386                                                |     +--N20    |
387                                                |     +--N21    |
388                                                |     ...       |
389                                                |     +--N22    |
390                                                |               |
391                                                |  Network      |
392                                                |  Controller   |
393                                                |  Datastore    |
394                                                +---------------+

396           Figure 2: Distributed config settings topology

398 4.  Operating on mounted data

400    This section provides a rough illustration of the operations flow
401    involving mounted datastores.

403    The first thing to note about these operations flows is that a mount
404    client essentially constitutes a special
405    management application that interacts with a remote system.  To the
406    remote system, the mount client is in effect just another
407    application.  The remote system is the authoritative owner of the data.
408    While it is conceivable that the remote system (or an application
409    that proxies for the remote system) provides certain functionality to
410    facilitate the specific needs of the mount client, the fact that
411    another system decides to expose a certain "view" of that data is
412    fundamentally not its concern.

414    When a client makes a request to a server that involves data that is
415    mounted from a remote system, the server effectively acts as a
416    proxy to the remote system on the client's behalf.  It extracts
417    from the request the portion that involves the mounted subtree from
418    the remote system.  It strips that portion of its local context,
419    i.e. it removes any local data paths and inserts the data path of the
420    mounted remote subtree, as appropriate.  The server then forwards
421    the transposed request to the remote system that is the authoritative
422    owner of the mounted data.  Upon receiving the reply, the server
423    transposes the results into the local context as needed, for example
424    by mapping the data paths into the local data tree structure, and
425    combines those results with the results of the remaining portion of
426    the original request.

428    In the simplest, and at the same time perhaps the most common, case,
429    the request will involve simple data retrieval.  In that case, a
430    "get" or "get-config" operation might be applied on a subtree
431    whose scope includes a mount point.  When resolving the mount point,
432    the server issues its own "get" or "get-config" request
433    against the remote system's subtree that is attached to the mount
434    point.  The returned information is then inserted into the data
435    structure that is in turn returned to the client that originally
436    invoked the request.

438    Requests that involve editing of information and "writing through" to
439    remote systems are more complicated, particularly where they involve
440    the need for transactions and locking.
While not our primary concern
441    at this time, the implications are briefly discussed in
442    Section 5.4.5.

444    Since mounted information in general involves communication with a
445    remote system, there is a possibility that the remote system does not
446    respond within a certain amount of time, that connectivity is lost,
447    or that other errors occur.  Accordingly, the ability to mount
448    datastores also involves mountpoint management, which includes the
449    ability to configure timeouts, retries, and management of mountpoint
450    state (including dynamic addition and removal of mountpoints).

452    As a final note, it is conceivable that caching schemes are
453    introduced.  Caching can increase performance and efficiency in
454    certain scenarios (for example, in the case of data that is
455    frequently read but rarely changes), but it increases
456    implementation complexity.  Whether to perform caching is purely a
457    local implementation decision.  This specification has no
458    requirement that caching be introduced and makes no corresponding
459    assumptions; there is no dependency on any caching scheme.

461 5.  Data model structure

463 5.1.  YANG mountpoint extensions

465    At the center of the module is a set of YANG extensions that allow a
466    mountpoint to be defined.

468    o  The first extension, "mountpoint", is used to declare a
469       mountpoint.  The extension takes the name of the mountpoint as an
470       argument.

472    o  The second extension, "target", serves as a substatement
473       underneath a mountpoint statement.  It takes an argument that
474       identifies the target system.  The argument is a reference to a
475       data node that contains the information that is needed to identify
476       and address a remote server, such as an IP address, a host name,
477       or a URI [RFC3986].

479    o  The third extension, "subtree", also serves as a substatement
480       underneath a mountpoint statement.
It takes an argument that
481    defines the root node of the datastore subtree that is to be
482    mounted, specified as a string that contains a path expression.

484    A mountpoint MUST be contained underneath a container.  Future
485    revisions might allow for mountpoints to be contained underneath
486    other data nodes, such as lists, leaf-lists, and cases.  However, to
487    keep things simple, at this point mounting is only allowed directly
488    underneath a container.

490    Only a single data node can be mounted at one time.  While the mount
491    target could refer to any data node, it is recommended that as a best
492    practice, the mount target SHOULD refer to a container.  It is
493    possible, for example, to maintain a list of mount points, each of
494    which has as its mount target an element of a remote list.
495    However, to avoid unnecessary proliferation of the number of mount
496    points and associated management overhead, in order to mount lists or
497    leaf-lists, a container containing the list or leaf-list,
498    respectively, SHOULD be mounted.

500    It is possible for a mounted datastore to contain another mountpoint,
501    thus leading to several levels of mount indirection.  However,
502    mountpoints MUST NOT introduce circular dependencies.  In particular,
503    a mounted datastore MUST NOT contain a mountpoint which specifies the
504    mounting datastore as a target and a subtree which contains as root
505    node a data node that in turn contains the original mountpoint.
506    Whenever a mount operation is performed, this condition MUST be
507    validated by the mount client.

509 5.2.  Mountpoint management

511    The YANG module contains facilities to manage the mountpoints
512    themselves.

514    For this purpose, a list of the mountpoints is introduced.  Each list
515    element represents a single mountpoint.  It includes an
516    identification of the mount target, i.e.
the remote system hosting
517    the remote datastore, and a definition of the subtree of the remote
518    data node being mounted.  It also includes monitoring information
519    about the current status (indicating whether the mount has been
520    successful and is operational, or whether an error condition applies,
521    such as the target being unreachable or referring to an invalid
522    subtree).

524    In addition to the list of mountpoints, a set of global mount policy
525    settings allows parameters such as mount retries and timeouts to be
526    set.

527    Each mountpoint list element also contains a set of the same
528    configuration knobs, allowing administrators to override global mount
529    policies and configure mount policies on a per-mountpoint basis if
530    needed.

532    Mounting can occur in two ways: automatically (dynamically
533    performed as part of system operation) or manually (administered by a
534    user or client application).  A separate mountpoint-origin object is
535    used to distinguish between manually configured and automatically
536    populated mountpoints.

538    When configured automatically, mountpoint information is
539    automatically populated by the datastore that implements the
540    mountpoint.  The precise mechanisms for discovering mount targets and
541    bootstrapping mount points are provided by the mount client
542    infrastructure and are outside the scope of this specification.
543    Likewise, when a mountpoint should be deleted and when it should
544    merely have its mount-status indicate that the target is unreachable
545    is a system-specific implementation decision.

547    Manual mounting consists of two steps.  In a first step, a mountpoint
548    is manually configured by a user or client application through
549    administrative action.  Once a mountpoint has been configured, actual
550    mounting occurs through an RPC that is defined specifically for that
551    purpose.  To unmount, a separate RPC is invoked; mountpoint
552    configuration information needs to be explicitly deleted.
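The policy fallback behavior described in Section 5.2 (per-mountpoint knobs overriding the global mount policies) can be sketched as follows. The leaf names loosely follow the tree in that section, but the merge function and the dict representation are invented for this example and are not mandated by the specification.

```python
# Illustrative sketch only: shows how per-mountpoint configuration knobs
# could override global mount policies.  Names and the merge logic are
# assumptions for illustration, not normative behavior.

GLOBAL_POLICIES = {"retry-timer": 30, "number-of-retries": 3}

def effective_policy(mountpoint, global_policies=GLOBAL_POLICIES):
    """Start from the global policies, then apply any knobs that are
    configured on the mountpoint list element itself."""
    policy = dict(global_policies)
    for knob in ("manual-mount", "retry-timer", "number-of-retries"):
        if knob in mountpoint:
            policy[knob] = mountpoint[knob]
    return policy

# A mountpoint that overrides only the retry timer:
mp = {"mountpoint-id": "element-1", "retry-timer": 5}
policy = effective_policy(mp)
# policy["retry-timer"] comes from the mountpoint itself, while
# policy["number-of-retries"] falls back to the global value.
```

An administrator who configures no per-mountpoint knobs simply gets the global policies; configuring a single knob overrides only that one parameter.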
554    The structure of the mountpoint management data model is depicted in
555    the following figure, where brackets enclose list keys, "rw" means
556    configuration, "ro" operational state data, and "?" designates
557    optional nodes.  Parentheses enclose choice and case nodes.  The
558    figure does not depict all definitions; it is intended to illustrate
559    the overall structure.

561    rw mount-server-mgmt
562       +-- rw mountpoints
563       |  +-- rw mountpoint [mountpoint-id]
564       |     +-- rw mountpoint-id        string
565       |     +-- rw mount-target
566       |     |  +--: (IP)
567       |     |  |  +-- rw target-ip     yang:ip-address
568       |     |  +--: (URI)
569       |     |  |  +-- rw uri           yang:uri
570       |     |  +--: (host-name)
571       |     |  |  +-- rw hostname      yang:host
572       |     |  +-- (node-ID)
573       |     |  |  +-- rw node-info-ref mnt:subtree-ref
574       |     |  +-- (other)
575       |     |     +-- rw opaque-target-id  string
576       |     +-- rw subtree-ref          mnt:subtree-ref
577       |     +-- ro mountpoint-origin    enumeration
578       |     +-- ro mount-status         mnt:mount-status
579       |     +-- rw manual-mount?        empty
580       |     +-- rw retry-timer?         uint16
581       |     +-- rw number-of-retries?   uint8
582       +-- rw global-mount-policies
583          +-- rw manual-mount?           empty
584          +-- rw retry-time?             uint16
585          +-- rw number-of-retries?      uint8

587 5.3.  YANG structure diagrams

589    YANG data model structure overviews have proven very useful to convey
590    the "big picture".  It would be useful to indicate in YANG data model
591    structure overviews that a given data node serves as a
592    mountpoint.  For this purpose, we propose a corresponding
593    extension to the structure representation convention.  Specifically,
594    we propose to prefix the name of the mounting data node with an
595    upper-case 'M'.

597    rw network
598       +-- rw nodes
599          +-- rw node [node-ID]
600             +-- rw node-ID
601             +-- M  node-system-info

603 5.4.  Other considerations

605 5.4.1.
Authorization 607 Authorization rules determine whether a mount client is allowed to 608 modify information in a mounted datastore or only to retrieve it, and 609 whether there are certain data nodes or subtrees within the mounted 610 information for which access is restricted. To the mounted system, 611 a mounting client will in general appear like any other client. 612 Authorization privileges for remote mounting clients need to be 613 specified through NACM (NETCONF Access Control Model) [RFC6536]. 615 Users and implementers need to be aware of certain issues when 616 mounted information is modified, not just retrieved. Specifically, 617 in certain corner cases, validation of changes made to mounted data 618 may involve constraints that depend on information that is not 619 visible to the mounting datastore. This means that in such cases the 620 reason for validation failures may not always be fully understood by 621 the mounting system. 623 Likewise, if the concepts of transactions and locking are applied at 624 the mounting system, these concepts will need to be applied across 625 multiple systems, not just across multiple data nodes within the same 626 system. This capability may not be supported by every 627 implementation. For example, locking a datastore that contains a 628 mountpoint requires that the mount client obtain corresponding locks 629 on the mounted datastore as needed. Any request to acquire a lock on 630 a configuration subtree that includes a mountpoint MUST NOT be 631 granted if the mount client fails to obtain a corresponding lock on 632 the mounted system. Likewise, if transactions are supported by the 633 mounting system but not by the target system, requests to acquire a 634 lock on a configuration subtree that includes a mountpoint MUST NOT 635 be granted. 637 5.4.2.
Datastore qualification 639 It is conceivable to differentiate between different datastores on 640 the remote server, that is, to designate the name of the actual 641 datastore to mount, e.g. "running" or "startup". However, for the 642 purposes of this specification, we assume that the datastore to be 643 mounted is generally implied. Mounted information is treated as 644 analogous to operational data; in general, this means the running or 645 "effective" datastore is the target. That said, the information 646 about which targets to mount does constitute configuration and can 647 hence be part of a startup or candidate datastore. 649 5.4.3. Local mounting 651 It is conceivable that the mount target does not reside in a remote 652 datastore, but that data nodes in the same datastore as the 653 mountpoint are targeted for mounting. This amounts to introducing an 654 "aliasing" capability in a datastore. While this is not the scenario 655 that is primarily targeted, it is supported and there may be valid 656 use cases for it. 658 5.4.4. Mount cascades 659 It is possible for the mounted subtree to in turn contain a 660 mountpoint. However, circular mount relationships MUST NOT be 661 introduced. For this reason, a mounted subtree MUST NOT contain a 662 mountpoint that refers back to the mounting system with a mount 663 target that directly or indirectly contains the originating 664 mountpoint. As part of a mount operation, the mount points of the 665 mounted system need to be checked accordingly. 667 5.4.5. Implementation considerations 669 Implementation specifics are outside the scope of this specification. 670 That said, the following considerations apply: 672 Systems that wish to mount information from remote datastores need to 673 implement a mount client. The mount client communicates with a 674 remote system to access the remote datastore. To do so, there are 675 several options: 677 o The mount client acts as a NETCONF client to a remote system.
678 Alternatively, another interface to the remote system can be used, 679 such as a REST API using JSON encodings, as specified in 680 [I-D.bierman-netconf-restconf]. Either way, to the remote system, 681 the mount client constitutes essentially a client application like 682 any other. The mount client is, in effect, a special kind of client 683 application. 685 o The mount client communicates with a remote mount server through a 686 separate protocol. The mount server is deployed on the same 687 system as the remote NETCONF datastore and interacts with it 688 through a set of local APIs. 690 o The mount client communicates with a remote mount server that acts 691 as a NETCONF client proxy to a remote system, on the client's 692 behalf. The communication between mount client and remote mount 693 server might involve a separate protocol, which is translated into 694 NETCONF operations by the remote mount server. 696 It is the responsibility of the mount client to manage the 697 association with the target system, e.g. validating that it is still 698 reachable by maintaining a permanent association, performing 699 reachability checks in the case of a connectionless transport, etc. 701 It is the responsibility of the mount client to manage the 702 mountpoints. This means that the mount client needs to populate the 703 mountpoint monitoring information (e.g. keep mount-status up to date 704 and determine, in the case of automatic mounting, when to add and 705 remove mountpoint configuration). In the case of automatic mounting, 706 the mount client also interacts with the mountpoint discovery and 707 bootstrap process. 709 The mount client also needs to participate in servicing datastore 710 operations involving mounted information. An operation requested 711 involving a mountpoint is relayed by the mounting system's 712 infrastructure to the mount client.
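To make the mount client's policy-driven behavior concrete, the following non-normative sketch shows how a client might drive mount attempts and derive a mount-status, honoring the retry-timer and number-of-retries policy knobs. The connect and sleep callbacks are hypothetical stand-ins for establishing the association over whatever transport is used (NETCONF, RESTCONF, or a separate mount protocol).

```python
# Non-normative sketch: a mount client attempting a mount and keeping
# the mountpoint's mount-status up to date. connect() is a stand-in
# for establishing the association with the target system.
def try_mount(connect, number_of_retries=3, sleep=lambda s: None,
              retry_timer=30):
    """Return the resulting mount-status ('ok' or 'target-unreachable')."""
    for attempt in range(1 + number_of_retries):
        if connect():
            return "ok"
        sleep(retry_timer)  # wait retry-timer seconds before the next attempt
    return "target-unreachable"

# Example: a target that becomes reachable on the third attempt.
attempts = {"n": 0}
def flaky_connect():
    attempts["n"] += 1
    return attempts["n"] >= 3

assert try_mount(flaky_connect) == "ok"
assert try_mount(lambda: False, number_of_retries=1) == "target-unreachable"
```

A real client would additionally transition mount-status through values such as no-target, no-subtree, or mount-failure as defined by the mount-status typedef in the module below.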
For example, a request to 713 retrieve information from a datastore leads to an invocation of an 714 internal mount client API when a mount point is reached. The mount 715 client then relays a corresponding operation to the remote datastore. 716 It subsequently relays the result along with any responses back to 717 the invoking infrastructure, which then merges the result (e.g. a 718 retrieved subtree with the rest of the information that was 719 retrieved) as needed. Relaying the result may involve the need to 720 transpose error response codes in certain corner cases, e.g. when 721 mounted information could not be reached due to loss of connectivity 722 with the remote server, or when a configuration request failed due to 723 a validation error. 725 6. Datastore mountpoint YANG module 727 <CODE BEGINS> file "mount@2013-09-22.yang" 729 module mount { 730 namespace "urn:cisco:params:xml:ns:yang:mount"; 731 // replace with IANA namespace when assigned 733 prefix mnt; 735 import ietf-yang-types { 736 prefix yang; 737 } 739 organization 740 "IETF NETMOD (NETCONF Data Modeling Language) Working Group"; 742 contact 743 "WG Web: http://tools.ietf.org/wg/netmod/ 744 WG List: netmod@ietf.org 746 WG Chair: David Kessens 747 david.kessens@nsn.com 748 WG Chair: Juergen Schoenwaelder 749 j.schoenwaelder@jacobs-university.de 751 Editor: Alexander Clemm 752 alex@cisco.com"; 754 description 755 "This module provides a set of YANG extensions and definitions 756 that can be used to mount information from remote datastores."; 758 revision 2013-09-22 { 759 description "Initial revision."; 760 } 762 feature mount-server-mgmt { 763 description 764 "Provide additional capabilities to manage remote mount 765 points"; 766 } 768 extension mountpoint { 769 description 770 "This YANG extension is used to mount data from a remote 771 system in place of the node under which this YANG extension 772 statement is used. 774 This extension takes one argument which specifies the name 775 of the mountpoint.
777 This extension can occur as a substatement underneath a 778 container statement, a list statement, or a case statement. 779 As a best practice, it SHOULD occur as a substatement only 780 underneath a container statement, but it MAY also occur 781 underneath a list or a case statement. 783 The extension is complemented by two substatements, target 784 and subtree, each defined as a YANG extension of its own. 785 A mountpoint statement MUST contain a target and a subtree 786 substatement for the mountpoint definition to be valid. 788 The target system MAY be specified in terms of a data node 789 that uses the grouping 'mnt:mount-target'. However, it 790 can also be specified in terms of any other data node that 791 contains sufficient information to address the mount target, 792 such as an IP address, a host name, or a URI. 794 The subtree SHOULD be specified in terms of a data node of 795 type 'mnt:subtree-ref'. The targeted data node MUST 796 represent a container. 798 It is possible for the mounted subtree to in turn contain a 799 mountpoint. However, circular mount relationships MUST NOT 800 be introduced. For this reason, a mounted subtree MUST NOT 801 contain a mountpoint that refers back to the mounting system 802 with a mount target that directly or indirectly contains the 803 originating mountpoint."; 805 argument "name"; 806 } 808 extension target { 809 description 810 "This YANG extension is used to specify a remote target 811 system from which to mount a datastore subtree. This YANG 812 extension takes one argument which specifies the remote 813 system. In general, this argument will contain the name of 814 a data node that contains the remote system information. It 815 is recommended that the referenced data node use the 816 mount-target grouping that is defined further below in this 817 module. 819 This YANG extension can occur only as a substatement below 820 a mountpoint statement.
It MUST NOT occur as a substatement 821 below any other YANG statement."; 823 argument "target-name"; 825 } 826 extension subtree { 827 description 828 "This YANG extension is used to specify a subtree in a 829 datastore that is to be mounted. This YANG extension takes 830 one argument which specifies the path to the root of the 831 subtree. The root of the subtree SHOULD represent an 832 instance of a YANG container. However, it MAY also 833 represent another data node. 835 This YANG extension can occur only as a substatement below 836 a mountpoint statement. It MUST NOT occur as a substatement 837 below any other YANG statement."; 839 argument "subtree-path"; 840 } 842 typedef mount-status { 843 description 844 "This type is used to represent the status of a 845 mountpoint."; 846 type enumeration { 847 enum ok { 848 description 849 "Mounted"; 850 } 851 enum no-target { 852 description 853 "The argument of the mountpoint does not define a 854 target system"; 855 } 856 enum no-subtree { 857 description 858 "The argument of the mountpoint does not define a 859 root of a subtree"; 860 } 861 enum target-unreachable { 862 description 863 "The specified target system is currently 864 unreachable"; 865 } 866 enum mount-failure { 867 description 868 "Any other mount failure"; 869 } 870 enum unmounted { 871 description 872 "The specified mountpoint has been unmounted as the 873 result of a management operation"; 874 } 875 } 876 } 877 typedef subtree-ref { 878 type string; // string pattern to be defined 879 description 880 "This string specifies a path to a data node. It corresponds 881 to the path substatement of a leafref type statement. Its 882 syntax needs to conform to the corresponding subset of the 883 XPath abbreviated syntax. Contrary to a leafref type, 884 subtree-ref allows referring to a node in a remote datastore.
885 Also, a subtree-ref refers only to a single node, not a list 886 of nodes."; 887 } 888 rpc mount { 889 description 890 "This RPC allows an application or administrative user to 891 perform a mount operation. If successful, it will result in 892 the creation of a new mountpoint."; 893 input { 894 leaf mountpoint-id { 895 type string { 896 length "1..32"; 897 } 898 } 899 } 900 output { 901 leaf mount-status { 902 type mount-status; 903 } 904 } 905 } 906 rpc unmount { 907 description "This RPC allows an application or administrative user to 908 unmount information from a remote datastore. If successful, 909 the corresponding mountpoint will be removed from the 910 datastore."; 911 input { 912 leaf mountpoint-id { 913 type string { 914 length "1..32"; 915 } 916 } 917 } 918 output { 919 leaf mount-status { 920 type mount-status; 921 } 922 } 923 } 924 grouping mount-monitor { 925 leaf mount-status { 926 description 927 "Indicates whether a mountpoint has been successfully 928 mounted or whether some kind of fault condition is 929 present."; 930 type mount-status; 931 config false; 932 } 933 } 934 grouping mount-target { 935 description 936 "This grouping contains data nodes that can be used to 937 identify a remote system from which to mount a datastore 938 subtree."; 939 container mount-target { 940 choice target-address-type { 941 mandatory true; 942 case IP { 943 leaf target-ip { 944 type yang:ip-address; 945 } 946 } case URI { 947 leaf uri { 948 type yang:uri; 949 } 950 } 951 case host-name { 952 leaf hostname { 953 type yang:host; 954 } 955 } 956 case node-ID { 957 leaf node-info-ref { 958 type subtree-ref; 959 } 960 } 961 case other { 962 leaf opaque-target-ID { 963 type string; 964 description 965 "Catch-all; could also be used for mounting 966 data nodes that are local."; 967 } 968 } 969 } 970 } 971 } 972 grouping mount-policies { 973 description 974 "This grouping contains data nodes that allow configuring 975 policies associated with mountpoints."; 976 leaf manual-mount {
977 type empty; 978 description 979 "When present, a specified mountpoint is not 980 automatically mounted when the mount data node is 981 created, but needs to be mounted via a specific RPC 982 invocation."; 983 } 984 leaf retry-timer { 985 type uint16; 986 units "seconds"; 987 description 988 "When specified, provides the period after which 989 mounting will be automatically reattempted when the 990 mount-status indicates an unreachable target."; 991 } 992 leaf number-of-retries { 993 type uint8; 994 description 995 "When specified, provides a limit on the number of 996 times for which retries will be automatically 997 attempted."; 998 } 999 } 1001 container mount-server-mgmt { 1002 if-feature mount-server-mgmt; 1003 container mountpoints { 1004 list mountpoint { 1005 key "mountpoint-id"; 1007 leaf mountpoint-id { 1008 type string { 1009 length "1..32"; 1010 } 1011 } 1012 leaf mountpoint-origin { 1013 type enumeration { 1014 enum client { 1015 description 1016 "Mountpoint has been supplied and is 1017 manually administered by a client"; 1018 } 1019 enum auto { 1020 description 1021 "Mountpoint is automatically 1022 administered by the server"; 1023 } 1024 } 1025 config false; 1026 } 1027 uses mount-target; 1028 leaf subtree-ref { 1029 type subtree-ref; 1030 mandatory true; 1031 } 1032 uses mount-monitor; 1033 uses mount-policies; 1034 } 1035 } 1036 container global-mount-policies { 1037 uses mount-policies; 1038 description 1039 "Provides mount policies applicable for all mountpoints, 1040 unless overridden for a specific mountpoint."; 1041 } 1042 } 1043 } 1044 <CODE ENDS> 1046 7. Security Considerations 1048 TBD 1050 8. Acknowledgements 1052 We wish to acknowledge the helpful contributions, comments, and 1053 suggestions that were received from Tony Tkacik, Robert Varga, Lukas 1054 Sedlak, and Benoit Claise. 1056 9. References 1058 9.1. Normative References 1060 [RFC2131] Droms, R., "Dynamic Host Configuration Protocol", RFC 1061 2131, March 1997.
1063 [RFC2866] Rigney, C., "RADIUS Accounting", RFC 2866, June 2000. 1065 [RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform 1066 Resource Identifier (URI): Generic Syntax", STD 66, RFC 1067 3986, January 2005. 1069 [RFC6020] Bjorklund, M., "YANG - A Data Modeling Language for the 1070 Network Configuration Protocol (NETCONF)", RFC 6020, 1071 October 2010. 1073 [RFC6241] Enns, R., Bjorklund, M., Schoenwaelder, J., and A. 1074 Bierman, "Network Configuration Protocol (NETCONF)", RFC 1075 6241, June 2011. 1077 [RFC6536] Bierman, A. and M. Bjorklund, "Network Configuration 1078 Protocol (NETCONF) Access Control Model", RFC 6536, March 1079 2012. 1081 9.2. Informative References 1083 [I-D.bierman-netconf-restconf] 1084 Bierman, A., Bjorklund, M., Watsen, K., and R. Fernando, 1085 "RESTCONF Protocol", draft-bierman-netconf-restconf-01 1086 (work in progress), September 2013. 1088 Appendix A. Example 1090 In the following example, we are assuming the use case of a network 1091 controller that wants to provide a controller network view to its 1092 client applications. This view needs to include network abstractions 1093 that are maintained by the controller itself, as well as certain 1094 information about network devices where the network abstractions tie 1095 in with element-specific information. For this purpose, the network 1096 controller leverages the mount capability specified in this document 1097 and presents a fictitious Controller Network YANG Module that is 1098 depicted in the outlined structure below. 
The example illustrates 1099 how mounted information is leveraged by the mounting datastore to 1100 provide an additional level of information that ties together network 1101 and device abstractions, which could not be provided otherwise 1102 without introducing a (redundant) model to replicate those device 1103 abstractions. 1105 rw controller-network 1106 +-- rw topologies 1107 | +-- rw topology [topo-id] 1108 | +-- rw topo-id node-id 1109 | +-- rw nodes 1110 | | +-- rw node [node-id] 1111 | | +-- rw node-id node-id 1112 | | +-- rw supporting-ne network-element-ref 1113 | | +-- rw termination-points 1114 | | +-- rw term-point [tp-id] 1115 | | +-- tp-id tp-id 1116 | | +-- ifref mountedIfRef 1117 | +-- rw links 1118 | +-- rw link [link-id] 1119 | +-- rw link-id link-id 1120 | +-- rw source tp-ref 1121 | +-- rw dest tp-ref 1122 +-- rw network-elements 1123 +-- rw network-element [element-id] 1124 +-- rw element-id element-id 1125 +-- rw element-address 1126 | +-- ... 1127 +-- M interfaces 1129 The controller network model consists of the following key 1130 components: 1132 o A container with a list of topologies. A topology is a graph 1133 representation of a network at a particular layer, for example, an 1134 IS-IS topology, an overlay topology, or an OpenFlow topology. 1135 Specific topology types can be defined in their own separate YANG 1136 modules that augment the controller network model. Those 1137 augmentations are outside the scope of this example. 1139 o An inventory of network elements, along with certain information 1140 that is mounted from each element. The information that is 1141 mounted in this case concerns interface configuration information. 1142 For this purpose, each list element that represents a network 1143 element contains a corresponding mountpoint. The mountpoint uses 1144 as its target the network element address information provided in 1145 the same list element. 1147 o Each topology in turn contains a container with a list of nodes.
1148 A node is a network abstraction of a network device in the 1149 topology. A node is hosted on a network element, as indicated by 1150 a network-element leafref. This way, the "logical" and "physical" 1151 aspects of a node in the network are cleanly separated. 1153 o A node also contains a list of termination points that terminate 1154 links. A termination point is implemented on an interface. 1155 Therefore, it contains a leafref that references the corresponding 1156 interface configuration which is part of the mounted information 1157 of a network element. Again, the distinction between termination 1158 points and interfaces provides a clean separation between logical 1159 concepts at the network topology level and device-specific 1160 concepts that are instantiated at the level of a network element. 1161 Because the interface information is mounted from a different 1162 datastore and therefore occurs at a different level of the 1163 containment hierarchy than it would if it were not mounted, it is 1164 not possible to use the interface-ref type that is defined in the 1165 YANG data model for interface management [] to allow the 1166 termination point to refer to its supporting interface. For this 1167 reason, a new type definition "mountedIfRef" is introduced that 1168 allows referring to interface information that is mounted and 1169 hence has a different path. 1171 o Finally, a topology also contains a container with a list of 1172 links. A link is a network abstraction that connects nodes via 1173 node termination points. In the example, directional point-to- 1174 point links are depicted in which one node termination point 1175 serves as the source, another as the destination. 1177 The following is a YANG snippet of the module definition which makes 1178 use of the mountpoint definition.
1180 <CODE BEGINS> 1181 module controller-network { 1182 namespace "urn:cisco:params:xml:ns:yang:controller-network"; 1183 // example only, replace with IANA namespace when assigned 1184 prefix cn; 1185 import mount { 1186 prefix mnt; 1187 } 1188 import interfaces { 1189 prefix if; 1190 } 1191 ... 1192 typedef mountedIfRef { 1193 type leafref { 1194 path "/cn:controller-network/cn:network-elements/" 1195 +"cn:network-element/cn:interfaces/if:interface/if:name"; 1196 // cn:interfaces corresponds to the mountpoint 1197 } 1198 } 1199 ... 1200 list termination-point { 1201 key "tp-id"; 1202 ... 1203 leaf ifref { 1204 type mountedIfRef; 1205 } 1206 ... 1207 list network-element { 1208 key "element-id"; 1209 leaf element-id { 1210 type element-ID; 1211 } 1212 container element-address { 1213 ... // choice definition that allows specifying 1214 // host name, 1215 // IP addresses, URIs, etc 1216 } 1217 mnt:mountpoint "interfaces" { 1218 mnt:target "./element-address"; 1219 mnt:subtree "/if:interfaces"; 1220 } 1221 ... 1222 } 1223 ... 1224 <CODE ENDS> 1225 Finally, the following contains an XML snippet of instantiated YANG 1226 information. We assume three datastores: NE1 and NE2 each have a 1227 datastore (the mount targets) that contains interface configuration 1228 data, which is mounted into NC's datastore (the mount client). 1230 Interface information from NE1 datastore: 1232 <interfaces> 1233 <interface> 1234 <name>fastethernet-1/0</name> 1235 <type>ethernetCsmacd</type> 1236 <location>1/0</location> 1237 </interface> 1238 <interface> 1239 <name>fastethernet-1/1</name> 1240 <type>ethernetCsmacd</type> 1241 <location>1/1</location> 1242 </interface> 1243 </interfaces> 1245 Interface information from NE2 datastore: 1246 <interfaces> 1247 <interface> 1248 <name>fastethernet-1/0</name> 1249 <type>ethernetCsmacd</type> 1250 <location>1/0</location> 1251 </interface> 1252 <interface> 1253 <name>fastethernet-1/2</name> 1254 <type>ethernetCsmacd</type> 1255 <location>1/2</location> 1256 </interface> 1257 </interfaces> 1259 NC datastore with mounted interface information from NE1 and NE2: 1261 <controller-network> 1262 ... 1263 <network-elements> 1264 <network-element> 1265 <element-id>NE1</element-id> 1266 <element-address> .... </element-address> 1267 <interfaces> 1268 <interface> 1269 <name>fastethernet-1/0</name> 1270 <type>ethernetCsmacd</type> 1271 <location>1/0</location> 1273 </interface> 1274 <interface> 1275 <name>fastethernet-1/1</name> 1276 <type>ethernetCsmacd</type> 1277 <location>1/1</location> 1278 </interface> 1279 </interfaces> 1280 </network-element> 1281 <network-element> 1282 <element-id>NE2</element-id> 1283 <element-address> .... </element-address> 1284 <interfaces> 1285 <interface> 1286 <name>fastethernet-1/0</name> 1287 <type>ethernetCsmacd</type> 1288 <location>1/0</location> 1289 </interface> 1290 <interface> 1291 <name>fastethernet-1/2</name> 1292 <type>ethernetCsmacd</type> 1293 <location>1/2</location> 1294 </interface> 1295 </interfaces> 1296 </network-element> 1297 </network-elements> 1298 ... 1299 </controller-network> 1301 Authors' Addresses 1303 Alexander Clemm 1304 Cisco Systems 1306 EMail: alex@cisco.com 1308 Jan Medved 1309 Cisco Systems 1311 EMail: jmedved@cisco.com 1313 Eric Voit 1314 Cisco Systems 1316 EMail: evoit@cisco.com