Network Working Group                                          A. Clemm
Internet-Draft                                                J. Medved
Intended status: Experimental                                   E. Voit
Expires: September 22, 2013                               Cisco Systems
                                                          March 21, 2013

        Mounting YANG-Defined Information from Remote Datastores
                      draft-clemm-netmod-mount-00

Abstract

   This document introduces a new capability that allows YANG
   datastores to reference and incorporate information from remote
   datastores.
   This is accomplished using a new YANG data model that makes it
   possible to define and manage datastore mount points, which
   reference data nodes in remote datastores.  The data model includes
   a set of YANG extensions for the purpose of declaring such mount
   points.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 22, 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

   This document may contain material from IETF Documents or IETF
   Contributions published or made publicly available before November
   10, 2008.  The person(s) controlling the copyright in some of this
   material may not have granted the IETF Trust the right to allow
   modifications of such material outside the IETF Standards Process.
   Without obtaining an adequate license from the person(s) controlling
   the copyright in such materials, this document may not be modified
   outside the IETF Standards Process, and derivative works of it may
   not be created outside the IETF Standards Process, except to format
   it for publication as an RFC or to translate it into languages other
   than English.

Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  3
   2.  Definitions and Acronyms . . . . . . . . . . . . . . . . . . .  5
   3.  Example scenarios  . . . . . . . . . . . . . . . . . . . . . .  5
     3.1.  Network controller view  . . . . . . . . . . . . . . . . .  6
     3.2.  Distributed network configuration  . . . . . . . . . . . .  8
   4.  Data model structure . . . . . . . . . . . . . . . . . . . . .  9
     4.1.  YANG mountpoint extensions . . . . . . . . . . . . . . . .  9
     4.2.  Mountpoint management  . . . . . . . . . . . . . . . . . . 10
     4.3.  YANG structure diagrams  . . . . . . . . . . . . . . . . . 12
     4.4.  Other considerations . . . . . . . . . . . . . . . . . . . 12
       4.4.1.  Authorization  . . . . . . . . . . . . . . . . . . . . 12
       4.4.2.  Datastore qualification  . . . . . . . . . . . . . . . 13
       4.4.3.  Local mounting . . . . . . . . . . . . . . . . . . . . 13
       4.4.4.  Implementation considerations  . . . . . . . . . . . . 14
   5.  Datastore mountpoint YANG module . . . . . . . . . . . . . . . 15
   6.  Security Considerations  . . . . . . . . . . . . . . . . . . . 22
   7.  Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 22
   8.  References . . . . . . . . . . . . . . . . . . . . . . . . . . 22
     8.1.  Normative References . . . . . . . . . . . . . . . . . . . 22
     8.2.  Informative References . . . . . . . . . . . . . . . . . . 22
   Appendix A.  Example  . . . . . . . . . . . . . . . . . . . . . . 23

1.  Introduction

   This document introduces a new capability that allows YANG
   datastores [1] to incorporate and reference information from remote
   datastores.  This is provided by introducing a mountpoint concept,
   which allows a YANG data node to be declared as a "mount point"
   under which a remote datastore subtree can be mounted.  To the user
   of the primary datastore, the remote information appears as an
   integral part of the datastore.  The concept is reminiscent of
   mounting in a Network File System, where remote folders are mounted
   and appear as if they were folders on the local file system of the
   user's machine.

   The ability to mount information from remote datastores is new and
   not covered by existing YANG mechanisms.  Hitherto, management
   information provided in a datastore was intrinsically tied to the
   same server, whereas this ability allows the server to represent
   information from remote systems as if it were its own.  YANG does
   provide means by which modules that have been separately defined can
   reference and augment one another, as well as means to specify data
   nodes that reference other data nodes.  However, all the data is
   assumed to be instantiated as part of the same datastore, for
   example a datastore provided through a NETCONF server [2].  Existing
   YANG mechanisms do not account for the possibility that some
   information that needs to be referenced does not simply reside in a
   different subtree of the same datastore, or in a separate module
   that is also instantiated in the same datastore, but is genuinely
   part of a different datastore that is provided by a different
   server.

   The ability to mount data from remote datastores is useful to
   address various problems that several categories of applications
   are faced with:

   One category of applications that can leverage this capability
   concerns network controller applications that need to present a
   consolidated view of management information in datastores across a
   network.  Controller applications are faced with the problem that in
   order to expose information, that information needs to be part of
   their own datastore.  Today, this requires support of a
   corresponding YANG data module.  In order to expose information that
   concerns other network elements, that information has to be
   replicated into the controller's own datastore in the form of data
   nodes that may mirror but are clearly distinct from the
   corresponding data nodes in the network element's datastore.  In
   addition, in many cases a controller needs to impose its own
   hierarchy on the data that is different from the one that was
   defined as part of the original module.  An example of this concerns
   interface configuration data, which would be contained in a top-
   level container in a network element datastore, but may need to be
   contained in a list in a controller datastore in order to be able to
   distinguish instances from different network elements under the
   controller's scope.  This in turn would require the introduction of
   redundant YANG modules that effectively replicate the same
   information, save for differences in hierarchy.
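
   To illustrate the kind of duplication this would entail, the
   following YANG fragments sketch how the same interface information
   might end up being modeled twice, once at the network element and
   once at the controller.  This is an illustrative sketch only; the
   node names and structure are hypothetical and are not part of the
   data model defined in this document.

      // Network element view: interface configuration as a
      // top-level container (hypothetical fragment).
      container interfaces {
        list interface {
          key "name";
          leaf name { type string; }
          // further interface configuration ...
        }
      }

      // Controller view: the same information repeated underneath a
      // list of network elements, requiring a second, redundant
      // module (hypothetical fragment).
      list network-element {
        key "element-id";
        leaf element-id { type string; }
        container interfaces {
          list interface {
            key "name";
            leaf name { type string; }
            // further interface configuration ...
          }
        }
      }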

   By directly mounting information from network element datastores,
   the controller does not need to replicate the same information from
   multiple datastores, nor does it need to re-define any network
   element and system-level abstractions in order to put them in the
   context of network abstractions.

   A second category of applications concerns decentralized networking
   applications that require globally consistent configuration of
   parameters.  When each network element maintains its own datastore
   with the same configurable settings, a single global change requires
   modifying the same information in many network elements across a
   network.  In case of inconsistent configurations, network failures
   can result that are difficult to troubleshoot.  In many cases, what
   is more desirable is the ability to configure such settings in a
   single place, then make them available to every network element.
   Today, this generally requires the introduction of specialized
   servers and configuration options outside the scope of NETCONF, such
   as RADIUS [3] or DHCP [4].  In order to address this within the
   scope of NETCONF and YANG, the same information would have to be
   redundantly modeled and maintained, representing operational data
   (mirroring some remote server) on some network elements and
   configuration data on a designated master.  Either way, additional
   complexity ensues.

   Instead of replicating the same global parameters across different
   datastores, the solution presented in this document allows a single
   copy to be maintained in a subtree of a single datastore, which is
   then mounted by every network element that requires access to these
   parameters.  The global parameters can be hosted in a controller or
   a designated network element.  This considerably simplifies the
   management of parameters that need to be known across elements in a
   network and require global consistency.

   The capability to mount information from remote datastores into
   another datastore is provided by a set of YANG extensions that are
   used to define such mount points.  For this purpose, a new YANG
   module is introduced.  The module defines the YANG extensions, as
   well as a data model that can be used to manage the mountpoints and
   the mounting process itself.  Only the mounting module and server
   need to be aware of the concepts introduced here.  Mounting is
   transparent to the models being mounted; any YANG model can be
   mounted.

2.  Definitions and Acronyms

   Data node:  An instance of management information in a YANG
      datastore.

   DHCP:  Dynamic Host Configuration Protocol.

   Datastore:  A conceptual store of instantiated management
      information, with individual data items represented by data
      nodes which are arranged in a hierarchical manner.

   Data subtree:  An instantiated data node and the data nodes that are
      hierarchically contained within it.

   Mount client:  The system at which the mount point resides, into
      which the remote subtree is mounted.

   Mount point:  A data node that receives the root node of the remote
      datastore being mounted.

   Mount server:  The server with which the mount client communicates
      and which provides the mount client with access to the mounted
      information.
      Can be used synonymously with mount target.

   Mount target:  A remote server whose datastore is being mounted.

   NACM:  NETCONF Access Control Model.

   NETCONF:  Network Configuration Protocol.

   RADIUS:  Remote Authentication Dial In User Service.

   RPC:  Remote Procedure Call.

   Remote datastore:  A datastore residing at a remote node.

   URI:  Uniform Resource Identifier.

   YANG:  A data definition language for NETCONF.

3.  Example scenarios

   The following example scenarios outline some of the ways in which
   the ability to mount YANG datastores can be applied.  Other mount
   topologies can be conceived in addition to the ones presented here.

3.1.  Network controller view

   Network controllers can use the mounting capability to present a
   consolidated view of management information across the network.
   This allows network controllers to expose not only network
   abstractions, such as topologies or paths, but also network element
   abstractions, such as information about a network element's
   interfaces, from one consolidated place.

   While an application on top of a controller could in theory also
   bypass the controller and access network elements directly for
   network-element abstractions, this would come at the expense of
   added inconvenience for the client application.  In addition, it
   would compromise the ability to provide layered architectures in
   which access to the network by controller applications is truly
   channeled through the controller.

   Without a mounting capability, a network controller would need to at
   least conceptually replicate data from network elements in order to
   provide such a view, incorporating network element information into
   its own controller model, which is separate from the network
   element's model and needs to be populated from the network elements.
   This can introduce issues such as data consistency and staleness.
   Even more importantly, it would in general lead to the redundant
   definition of data models: one model implemented by the network
   element itself, and another model to be implemented by the network
   controller.  This leads to poor maintainability, as analogous
   information has to be redundantly defined and implemented across
   different data models.  In general, controllers cannot simply
   support the same modules as their network elements for the same
   information, because that information needs to be put into a
   different context.  This leads to "node" information that needs to
   be instantiated and indexed differently, because there are multiple
   instances across different datastores.

   For example, "system"-level information of a network element would
   most naturally be placed into a top-level container in that network
   element's datastore.  At the same time, the same information in the
   context of the overall network, such as that maintained by a
   controller, might better be provided in a list.  For example, the
   controller might maintain a list with a list element for each
   network element, underneath which the network element's system-level
   information is contained.  However, the containment structure of
   data nodes in a module, once defined, cannot be changed.  This means
   that in the context of a network controller, a second module that
   repeats the same system-level information would need to be defined,
   implemented, and maintained.
   Any augmentations that add additional system-level information to
   the original module will likewise need to be redundantly defined,
   once for the "system" module and a second time for the "controller"
   module.

   By allowing a network controller to directly mount information from
   network element datastores, the controller does not need to
   replicate the same information from multiple datastores.  Perhaps
   even more importantly, the need to re-define any network element and
   system-level abstractions in order to put them in the context of
   network abstractions is avoided.  In this solution, a network
   controller's datastore mounts information from many network element
   datastores.  For example, the network controller datastore could
   implement a list in which each list element contains a mountpoint.
   Each mountpoint mounts a subtree from a different network element's
   datastore.

   This scenario is depicted in Figure 1.  In the figure, M1 is the
   mountpoint for the datastore in Network Element 1 and M2 is the
   mountpoint for the datastore in Network Element 2.  MDN1 is the
   mounted data node in Network Element 1, and MDN2 is the mounted data
   node in Network Element 2.

   +-------------+
   | Network     |
   | Controller  |
   | Datastore   |
   |             |
   |  +--N10     |
   |  +--N11     |
   |  +--N12     |
   |  +--M1*******************************
   |  +--M2***********                   *
   |             |    *                  *
   +-------------+    *                  *
                      *   +---------------+   *   +---------------+
                      *   |  +--N1        |   *   |  +--N5        |
                      *   |  +--N2        |   *   |  +--N6        |
                      *****> +--MDN2      |   *****> +--MDN1      |
                          |  +--N3        |       |  +--N7        |
                          |  +--N4        |       |  +--N8        |
                          |               |       |               |
                          |  Network      |       |  Network      |
                          |  Element      |       |  Element      |
                          |  Datastore    |       |  Datastore    |
                          +---------------+       +---------------+

             Figure 1: Network controller mount topology

3.2.  Distributed network configuration

   A second category of applications concerns decentralized networking
   applications that require globally consistent configuration of
   parameters that need to be known across elements in a network.
   Today, the configuration of such parameters is generally performed
   on a per-network-element basis, which is not only redundant but,
   more importantly, error-prone.  Inconsistent configurations lead to
   erroneous network behavior that can be challenging to troubleshoot.

   Using the ability to mount information from remote datastores opens
   up a new possibility for managing such settings.  Instead of
   replicating the same global parameters across different datastores,
   a single copy is maintained in a subtree of a single datastore.
   This datastore can be hosted in a controller or a designated network
   element.  The subtree is subsequently mounted by every network
   element that requires access to these parameters.

   In many ways, this category of applications is an inverse of the
   previous category: whereas in the network controller case data from
   many different datastores is mounted into the same datastore using
   multiple mountpoints, in this case many network elements, each with
   its own datastore, mount the same subtree of a single remote
   datastore.

   The scenario is depicted in Figure 2.  In the figure, M1 is the
   mountpoint for the Network Controller datastore in Network Element 1
   and M2 is the mountpoint for the Network Controller datastore in
   Network Element 2.
   MDN is the mounted data node in the Network Controller datastore
   that contains the data nodes that represent the shared configuration
   settings.

   +---------------+          +---------------+
   |  Network      |          |  Network      |
   |  Element      |          |  Element      |
   |  Datastore    |          |  Datastore    |
   |               |          |               |
   |  +--N1        |          |  +--N5        |
   |  |  +--N2     |          |  |  +--N6     |
   |  |  +--N3     |          |  |  +--N7     |
   |  |  +--N4     |          |  |  +--N8     |
   |  |            |          |  |            |
   |  +--M1        |          |  +--M2        |
   +-----*---------+          +-----*---------+
         *                          *          +---------------+
         *                          *          |               |
         *                          *          |  +--N10       |
         *                          *          |  +--N11       |
         **********************************************> +--MDN |
                                                |     +--N20    |
                                                |     +--N21    |
                                                |     ...       |
                                                |     +--N22    |
                                                |               |
                                                |  Network      |
                                                |  Controller   |
                                                |  Datastore    |
                                                +---------------+

           Figure 2: Distributed config settings topology

4.  Data model structure

4.1.  YANG mountpoint extensions

   At the center of the module is a set of YANG extensions that are
   used to define a mountpoint.

   o  The first extension, "mountpoint", is used to declare a
      mountpoint.  The extension takes the name of the mountpoint as an
      argument.

   o  The second extension, "target", serves as a substatement
      underneath a mountpoint statement.  It takes an argument that
      identifies the target system.  The argument is a reference to a
      data node that contains the information that is needed to
      identify and address a remote server, such as an IP address, a
      host name, or a URI [5].

   o  The third extension, "subtree", also serves as a substatement
      underneath a mountpoint statement.  It takes an argument that
      defines the root node of the datastore subtree that is to be
      mounted, specified as a string that contains a path expression.

   A mountpoint MUST be contained underneath a container.  Future
   revisions might allow for mountpoints to be contained underneath
   other data nodes, such as lists, leaf-lists, and cases.  However, to
   keep things simple, at this point mounting is only allowed directly
   underneath a container.

   Only a single data node can be mounted at one time.  While the mount
   target could refer to any data node, it is recommended that, as a
   best practice, the mount target SHOULD refer to a container.
   Likewise, to mount lists or leaf-lists, a container containing the
   list or leaf-list SHOULD be mounted.

   It is possible for a mounted datastore to contain another
   mountpoint, thus leading to several levels of mount indirection.
   However, mountpoints MUST NOT introduce circular dependencies.  In
   particular, a mounted datastore MUST NOT contain a mountpoint which
   specifies the mounting datastore as a target and a subtree which
   contains as root node a data node that in turn contains the original
   mountpoint.  Whenever a mount operation is performed, this condition
   MUST be validated by the mount client.
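
   As an illustration of how the three extensions combine, the
   following sketch declares a mountpoint underneath a container, with
   the mount target taken from a sibling data node.  The module name,
   namespace, and node names are assumptions made for the purpose of
   this example only; the normative definitions of the extensions
   appear in Section 5.

      module example-mounting-module {
        namespace "urn:example:mounting-module";  // example only
        prefix ex;

        import mount {
          prefix mnt;
        }
        import ietf-yang-types {
          prefix yang;
        }
        import interfaces {
          prefix if;
        }

        container managed-device {
          // Data node referenced by the mount target.
          leaf device-address {
            type yang:ip-address;
            description
              "Address of the remote server whose datastore subtree
               is to be mounted.";
          }
          // The mountpoint is declared directly underneath a
          // container, as required, and receives the remote
          // /if:interfaces subtree.
          mnt:mountpoint "device-interfaces" {
            mnt:target "./device-address";
            mnt:subtree "/if:interfaces";
          }
        }
      }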

4.2.  Mountpoint management

   The YANG module contains facilities to manage the mountpoints
   themselves.

   For this purpose, a list of the mountpoints is introduced.  Each
   list element represents a single mountpoint.  It includes an
   identification of the mount target, i.e. the remote system hosting
   the remote datastore, and a definition of the subtree of the remote
   data node being mounted.  It also includes monitoring information
   about the current status (indicating whether the mount has been
   successful and is operational, or whether an error condition
   applies, such as the target being unreachable or referring to an
   invalid subtree).

   In addition to the list of mountpoints, a set of global mount policy
   settings allows parameters such as mount retries and timeouts to be
   set.

   Each mountpoint list element also contains a set of the same
   configuration knobs, allowing administrators to override global
   mount policies and configure mount policies on a per-mountpoint
   basis if needed.

   Mounting can occur in two ways: automatically (dynamically performed
   as part of system operation) or manually (administered by a user or
   client application).  A separate mountpoint-origin object is used to
   distinguish between manually configured and automatically populated
   mountpoints.

   When mountpoints are populated automatically, the mountpoint
   information is supplied by the datastore that implements the
   mountpoint.  The precise mechanisms for discovering mount targets
   and bootstrapping mount points are provided by the mount client
   infrastructure and are outside the scope of this specification.
   Likewise, when a mountpoint should be deleted and when it should
   merely have its mount-status indicate that the target is unreachable
   is a system-specific implementation decision.

   Manual mounting consists of two steps.  In a first step, a
   mountpoint is configured by a user or client application through
   administrative action.  Once a mountpoint has been configured,
   actual mounting occurs through an RPC that is defined specifically
   for that purpose.  To unmount, a separate RPC is invoked; mountpoint
   configuration information needs to be explicitly deleted.

   The structure of the mountpoint management data model is depicted in
   the following figure, where brackets enclose list keys, "rw" means
   configuration, "ro" operational state data, and "?" designates
   optional nodes.  Parentheses enclose choice and case nodes.  The
   figure does not depict all definitions; it is intended to illustrate
   the overall structure.

   rw mount-server-mgmt
      +-- rw mountpoints
      |  +-- rw mountpoint [mountpoint-id]
      |     +-- rw mountpoint-id        string
      |     +-- rw mount-target
      |     |  +--: (IP)
      |     |  |  +-- rw target-ip          yang:ip-address
      |     |  +--: (URI)
      |     |  |  +-- rw uri                yang:uri
      |     |  +--: (host-name)
      |     |  |  +-- rw hostname           yang:host
      |     |  +--: (node-ID)
      |     |  |  +-- rw node-info-ref      mnt:subtree-ref
      |     |  +--: (other)
      |     |     +-- rw opaque-target-id   string
      |     +-- rw subtree-ref          mnt:subtree-ref
      |     +-- ro mountpoint-origin    enumeration
      |     +-- ro mount-status         mnt:mount-status
      |     +-- rw manual-mount?        empty
      |     +-- rw retry-timer?         uint16
      |     +-- rw number-of-retries?   uint8
      +-- rw global-mount-policies
         +-- rw manual-mount?        empty
         +-- rw retry-timer?         uint16
         +-- rw number-of-retries?   uint8
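
   The following sketch shows what configuring a mountpoint and then
   manually mounting it might look like over NETCONF.  The
   mountpoint-id, target address, and subtree values are illustrative
   assumptions only, and the namespace shown is the placeholder used in
   Section 5 pending IANA assignment.

      <!-- Mountpoint configuration (edit-config content, sketch): -->
      <mount-server-mgmt xmlns="urn:cisco:params:xml:ns:yang:mount">
        <mountpoints>
          <mountpoint>
            <mountpoint-id>NE1-interfaces</mountpoint-id>
            <mount-target>
              <target-ip>192.0.2.1</target-ip>
            </mount-target>
            <subtree-ref>/if:interfaces</subtree-ref>
            <manual-mount/>
            <retry-timer>60</retry-timer>
          </mountpoint>
        </mountpoints>
      </mount-server-mgmt>

      <!-- Manually mounting the configured mountpoint via the mount
           RPC, and the corresponding reply: -->
      <rpc message-id="101"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <mount xmlns="urn:cisco:params:xml:ns:yang:mount">
          <mountpoint-id>NE1-interfaces</mountpoint-id>
        </mount>
      </rpc>

      <rpc-reply message-id="101"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <mount-status
            xmlns="urn:cisco:params:xml:ns:yang:mount">ok</mount-status>
      </rpc-reply>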

4.3.  YANG structure diagrams

   YANG data model structure overviews have proven very useful to
   convey the "Big Picture".  It would be useful to indicate in such
   structure overviews that a given data node serves as a mountpoint.
   For this purpose, we propose a corresponding extension to the
   structure representation convention.  Specifically, we propose to
   prefix the name of the mounting data node with an upper-case 'M'.

   rw network
      +-- rw nodes
         +-- rw node [node-ID]
            +-- rw node-ID
            +-- M  node-system-info

4.4.  Other considerations

4.4.1.  Authorization

   Whether a mount client is allowed to modify information in a mounted
   datastore or only retrieve it, and whether there are certain data
   nodes or subtrees within the mounted information for which access is
   restricted, is subject to authorization rules.  To the mounted
   system, a mounting client will in general appear like any other
   client.  Authorization privileges for remote mounting clients need
   to be specified through NACM (NETCONF Access Control Model) [6].

   Users and implementers need to be aware of certain issues when
   mounted information is modified, not just retrieved.  Specifically,
   in certain corner cases validation of changes made to mounted data
   may involve constraints that depend on information that is not
   visible to the mounting datastore.  This means that in such cases
   the reason for validation failures may not always be fully
   understood by the mounting system.

   Likewise, if the concepts of transactions and locking are applied at
   the mounting system, these concepts will need to be applied across
   multiple systems, not just across multiple data nodes within the
   same system.  This capability may not be supported by every
   implementation.  For example, locking a datastore that contains a
   mountpoint requires that the mount client obtain corresponding locks
   on the mounted datastore as needed.  Any request to acquire a lock
   on a configuration subtree that includes a mountpoint MUST NOT be
   granted if the mount client fails to obtain a corresponding lock on
   the mounted system.  Likewise, in case transactions are supported by
   the mounting system but not by the target system, requests to
   acquire a lock on a configuration subtree that includes a mountpoint
   MUST NOT be granted.
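
   For instance, a mount server might use NACM rules along the
   following lines to give a group of mount clients read-only access to
   the subtree that is being mounted while denying everything else.
   The group name, module name, and path are assumptions chosen for
   illustration and match the interface example used elsewhere in this
   document; they are not mandated by this specification.

      <nacm xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
        <rule-list>
          <name>mount-clients</name>
          <group>mount-clients</group>
          <rule>
            <name>permit-mounted-subtree-read</name>
            <module-name>interfaces</module-name>
            <path>/if:interfaces</path>
            <access-operations>read</access-operations>
            <action>permit</action>
          </rule>
          <rule>
            <name>deny-everything-else</name>
            <module-name>*</module-name>
            <access-operations>*</access-operations>
            <action>deny</action>
          </rule>
        </rule-list>
      </nacm>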

4.4.2.  Datastore qualification

   It is conceivable to differentiate between different datastores on
   the remote server, that is, to designate the name of the actual
   datastore to mount, e.g. "running" or "startup".  However, for the
   purposes of this specification, we assume that the datastore to be
   mounted is generally implied.  Mounted information is treated as
   analogous to operational data; in general, this means the running or
   "effective" datastore is the target.  That said, the information
   about which targets to mount does constitute configuration and can
   hence be part of a startup or candidate datastore.

4.4.3.  Local mounting

   It is conceivable that the mount target does not reside in a remote
   datastore, but that data nodes in the same datastore as the
   mountpoint are targeted for mounting.  This amounts to introducing
   an "aliasing" capability in a datastore.  While this is not the
   scenario that is primarily targeted, it is supported and there may
   be valid use cases for it.

4.4.4.  Implementation considerations

   Implementation specifics are outside the scope of this
   specification.  That said, the following considerations apply:

   Systems that wish to mount information from remote datastores need
   to implement a mount client.  The mount client communicates with a
   remote system to access the remote datastore.  To do so, there are
   several options:

   o  The mount client acts as a NETCONF client to a remote system.
      Alternatively, another interface to the remote system can be
      used, such as a REST API using JSON encodings, as specified in
      [7] and [8].  Either way, to the remote system, the mount client
      constitutes essentially a client application like any other.  The
      mount client in effect IS a special kind of client application.

   o  The mount client communicates with a remote mount server through
      a separate protocol.  The mount server is deployed on the same
      system as the remote NETCONF datastore and interacts with it
      through a set of local APIs.

   o  The mount client communicates with a remote mount server that
      acts as a NETCONF client proxy to a remote system, on the
      client's behalf.  The communication between mount client and
      remote mount server might involve a separate protocol, which is
      translated into NETCONF operations by the remote mount server.

   It is the responsibility of the mount client to manage the
   association with the target system, e.g. validate that it is still
   reachable by maintaining a permanent association, perform
   reachability checks in the case of a connectionless transport, etc.

   It is the responsibility of the mount client to manage the
   mountpoints.  This means that the mount client needs to populate the
   mountpoint monitoring information (e.g. keep the mount-status up to
   date and determine, in the case of automatic mounting, when to add
   and remove mountpoint configuration).  In the case of automatic
   mounting, the mount client also interacts with the mountpoint
   discovery and bootstrap process.

   The mount client also needs to participate in servicing datastore
   operations that involve mounted information.  An operation requested
   involving a mountpoint is relayed by the mounting system's
   infrastructure to the mount client.  For example, a request to
   retrieve information from a datastore leads to an invocation of an
   internal mount client API when a mount point is reached.  The mount
   client then relays a corresponding operation to the remote
   datastore.  It subsequently relays the result, along with any
   responses, back to the invoking infrastructure, which then merges
   the result (e.g. a retrieved subtree with the rest of the
   information that was retrieved) as needed.  Relaying the result may
   involve the need to transpose error response codes in certain corner
   cases, e.g. when mounted information could not be reached due to
   loss of connectivity with the remote server, or when a configuration
   request failed due to a validation error.

5.  Datastore mountpoint YANG module

   <CODE BEGINS> file "mount@2013-03-21.yang"

   module mount {
     namespace "urn:cisco:params:xml:ns:yang:mount";
     // replace with IANA namespace when assigned

     prefix mnt;

     import ietf-yang-types {
       prefix yang;
     }

     organization
       "IETF NETMOD (NETCONF Data Modeling Language) Working Group";

     contact
       "WG Web:   http://tools.ietf.org/wg/netmod/
        WG List:  netmod@ietf.org

        WG Chair: David Kessens
                  david.kessens@nsn.com

        WG Chair: Juergen Schoenwaelder
                  j.schoenwaelder@jacobs-university.de

        Editor:   Alexander Clemm
                  alex@cisco.com";

     description
       "This module provides a set of YANG extensions and definitions
        that can be used to mount information from remote datastores.";

     revision 2013-03-21 {
       description "Initial revision.";
     }

     feature mount-server-mgmt {
       description
         "Provide additional capabilities to manage remote mount
          points";
     }

     extension mountpoint {
       description
         "This YANG extension is used to mount data from a remote
          system in place of the node under which this YANG extension
          statement is used.

          This extension takes one argument which specifies the name
          of the mountpoint.

          This extension can occur as a substatement underneath a
          container statement, a list statement, or a case statement.
          As a best practice, it SHOULD occur as a substatement only
          underneath a container statement, but it MAY also occur
          underneath a list or a case statement.

          The extension takes two parameters, target and subtree, each
          defined as their own YANG extensions.  A mountpoint statement
          MUST contain a target and a subtree substatement for the
          mountpoint definition to be valid.

          The target system MAY be specified in terms of a data node
          that uses the grouping 'mnt:mount-target'.  However, it
          can be specified also in terms of any other data node that
          contains sufficient information to address the mount target,
          such as an IP address, a host name, or a URI.

          The subtree SHOULD be specified in terms of a data node of
          type 'mnt:subtree-ref'.  The targeted data node MUST
          represent a container.

          It is possible for the mounted subtree to in turn contain a
          mountpoint.  However, circular mount relationships MUST NOT
          be introduced.  For this reason, a mounted subtree MUST NOT
          contain a mountpoint that refers back to the mounting system
          with a mount target that directly or indirectly contains the
          originating mountpoint.";

       argument "name";
     }

     extension target {
       description
         "This YANG extension is used to specify a remote target
          system from which to mount a datastore subtree.  This YANG
          extension takes one argument which specifies the remote
          system.  In general, this argument will contain the name of
          a data node that contains the remote system information.  It
          is recommended that the referenced data node use the
          mount-target grouping that is defined further below in this
          module.

          This YANG extension can occur only as a substatement below
          a mountpoint statement.  It MUST NOT occur as a substatement
          below any other YANG statement.";

       argument "target-name";
     }

     extension subtree {
       description
         "This YANG extension is used to specify a subtree in a
          datastore that is to be mounted.  This YANG extension takes
          one argument which specifies the path to the root of the
          subtree.  The root of the subtree SHOULD represent an
          instance of a YANG container.  However, it MAY represent
          also another data node.

          This YANG extension can occur only as a substatement below
          a mountpoint statement.  It MUST NOT occur as a substatement
          below any other YANG statement.";

       argument "subtree-path";
     }

     typedef mount-status {
       description
         "This type is used to represent the status of a
          mountpoint.";
       type enumeration {
         enum ok {
           description
             "Mounted";
         }
         enum no-target {
           description
             "The argument of the mountpoint does not define a
              target system";
         }
         enum no-subtree {
           description
             "The argument of the mountpoint does not define a
              root of a subtree";
         }
         enum target-unreachable {
           description
             "The specified target system is currently
              unreachable";
         }
         enum mount-failure {
           description
             "Any other mount failure";
         }
         enum unmounted {
           description
             "The specified mountpoint has been unmounted as the
              result of a management operation";
         }
       }
     }

     typedef subtree-ref {
       type string;   // string pattern to be defined
       description
         "This string specifies a path to a data node.  It corresponds
          to the path substatement of a leafref type statement.  Its
          syntax needs to conform to the corresponding subset of the
          XPath abbreviated syntax.  Contrary to a leafref type,
          subtree-ref makes it possible to refer to a node in a remote
          datastore.  Also, a subtree-ref refers only to a single node,
          not a list of nodes.";
     }

     rpc mount {
       description
         "This RPC allows an application or administrative user to
          perform a mount operation.  If successful, it will result in
          the creation of a new mountpoint.";
       input {
         leaf mountpoint-id {
           type string {
             length "1..32";
           }
         }
       }
       output {
         leaf mount-status {
           type mount-status;
         }
       }
     }

     rpc unmount {
       description
         "This RPC allows an application or administrative user to
          unmount information from a remote datastore.  If successful,
          the corresponding mountpoint will be removed from the
          datastore.";
       input {
         leaf mountpoint-id {
           type string {
             length "1..32";
           }
         }
       }
       output {
         leaf mount-status {
           type mount-status;
         }
       }
     }

     grouping mount-monitor {
       leaf mount-status {
         description
           "Indicates whether a mountpoint has been successfully
            mounted or whether some kind of fault condition is
            present.";
         type mount-status;
         config false;
       }
     }

     grouping mount-target {
       description
         "This grouping contains data nodes that can be used to
          identify a remote system from which to mount a datastore
          subtree.";
       container mount-target {
         choice target-address-type {
           mandatory true;
           case IP {
             leaf target-ip {
               type yang:ip-address;
             }
           }
           case URI {
             leaf uri {
               type yang:uri;
             }
           }
           case host-name {
             leaf hostname {
               type yang:host;
             }
           }
           case node-ID {
             leaf node-info-ref {
               type subtree-ref;
             }
           }
           case other {
             leaf opaque-target-ID {
               type string;
               description
                 "Catch-all; could be used also for mounting
                  of data nodes that are local.";
             }
           }
         }
       }
     }

     grouping mount-policies {
       description
         "This grouping contains data nodes that can be used to
          configure policies associated with mountpoints.";
       leaf manual-mount {
         type empty;
         description
           "When present, a specified mountpoint is not
            automatically mounted when the mount data node is
            created, but needs to be mounted via a specific RPC
            invocation.";
       }
       leaf retry-timer {
         type uint16;
         units "seconds";
         description
           "When specified, provides the period after which
            mounting will be automatically reattempted in case of a
            mount status of an unreachable target";
       }
       leaf number-of-retries {
         type uint8;
         description
           "When specified, provides a limit for the number of
            times for which retries will be automatically
            attempted";
       }
     }

     container mount-server-mgmt {
       if-feature mount-server-mgmt;
       container mountpoints {
         list mountpoint {
           key "mountpoint-id";

           leaf mountpoint-id {
             type string {
               length "1..32";
             }
           }
           leaf mountpoint-origin {
             type enumeration {
               enum client {
                 description
                   "Mountpoint has been supplied and is
                    manually administered by a client";
               }
               enum auto {
                 description
                   "Mountpoint is automatically
                    administered by the server";
               }
             }
             config false;
           }
           uses mount-target;
           leaf subtree-ref {
             type subtree-ref;
             mandatory true;
           }
           uses mount-monitor;
           uses mount-policies;
         }
       }
       container global-mount-policies {
         uses mount-policies;
         description
           "Provides mount policies applicable for all mountpoints,
            unless overridden for a specific mountpoint.";
       }
     }
   }

   <CODE ENDS>

6.  Security Considerations

   TBD

7.  Acknowledgements

   We wish to acknowledge the helpful contributions, comments, and
   suggestions that were received from Tony Tkacik, Robert Varga, Lukas
   Sedlak, and Benoit Claise.

8.  References

8.1.  Normative References

   [1]  Bjorklund, M., "YANG - A Data Modeling Language for the Network
        Configuration Protocol (NETCONF)", RFC 6020, October 2010.

   [2]  Enns, R., Bjorklund, M., Schoenwaelder, J., and A. Bierman,
        "Network Configuration Protocol (NETCONF)", RFC 6241,
        June 2011.

   [3]  Rigney, C., "RADIUS Accounting", RFC 2866, June 2000.

   [4]  Droms, R., "Dynamic Host Configuration Protocol", RFC 2131,
        March 1997.

   [5]  Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform
        Resource Identifier (URI): Generic Syntax", STD 66, RFC 3986,
        January 2005.

   [6]  Bierman, A. and M. Bjorklund, "Network Configuration Protocol
        (NETCONF) Access Control Model", RFC 6536, March 2012.

8.2.  Informative References

   [7]  Bierman, A. and M. Bjorklund, "YANG-API Protocol",
        draft-bierman-netconf-yang-api-01 (work in progress),
        November 2012.

   [8]  Lhotka, L., "Modeling JSON Text with YANG",
        draft-lhotka-netmod-yang-json-00 (work in progress),
        October 2012.

   [9]  Bjorklund, M., "A YANG Data Model for Interface Management",
        draft-ietf-netmod-interfaces-cfg-09 (work in progress),
        February 2013.

Appendix A.  Example

   In the following example, we assume the use case of a network
   controller that wants to provide a controller network view to its
   client applications.  This view needs to include network
   abstractions that are maintained by the controller itself, as well
   as certain information about network devices where the network
   abstractions tie in with element-specific information.  For this
   purpose, the network controller leverages the mount capability
   specified in this document and presents a fictitious Controller
   Network YANG Module that is depicted in the outlined structure
   below.  The example illustrates how mounted information is leveraged
   by the mounting datastore to provide an additional level of
   information that ties together network and device abstractions,
   which could not otherwise be provided without introducing a
   (redundant) model to replicate those device abstractions.

   rw controller-network
      +-- rw topologies
      |  +-- rw topology [topo-id]
      |     +-- rw topo-id                node-id
      |     +-- rw nodes
      |     |  +-- rw node [node-id]
      |     |     +-- rw node-id            node-id
      |     |     +-- rw supporting-ne      network-element-ref
      |     |     +-- rw termination-points
      |     |        +-- rw term-point [tp-id]
      |     |           +-- tp-id              tp-id
      |     |           +-- ifref              mountedIfRef
      |     +-- rw links
      |        +-- rw link [link-id]
      |           +-- rw link-id            link-id
      |           +-- rw source             tp-ref
      |           +-- rw dest               tp-ref
      +-- rw network-elements
         +-- rw network-element [element-id]
            +-- rw element-id         element-id
            +-- rw element-address
            |  +-- ...
            +-- M  interfaces

   The controller network model consists of the following key
   components:

   o  A container with a list of topologies.  A topology is a graph
      representation of a network at a particular layer, for example,
      an IS-IS topology, an overlay topology, or an OpenFlow topology.
      Specific topology types can be defined in their own separate YANG
      modules that augment the controller network model.  Those
      augmentations are outside the scope of this example.

   o  An inventory of network elements, along with certain information
      that is mounted from each element.  The information that is
      mounted in this case concerns interface configuration information
      that is defined in the YANG interfaces module [9].  For this
      purpose, each list element that represents a network element
      contains a corresponding mountpoint.
      The mountpoint uses as its target the network element address
      information provided in the same list element.

   o  Each topology in turn contains a container with a list of nodes.
      A node is a network abstraction of a network device in the
      topology.  A node is hosted on a network element, as indicated by
      a network-element leafref.  This way, the "logical" and
      "physical" aspects of a node in the network are cleanly
      separated.

   o  A node also contains a list of termination points that terminate
      links.  A termination point is implemented on an interface.
      Therefore, it contains a leafref that references the
      corresponding interface configuration, which is part of the
      mounted information of a network element.  Again, the distinction
      between termination points and interfaces provides a clean
      separation between logical concepts at the network topology level
      and device-specific concepts that are instantiated at the level
      of a network element.  Because the interface information is
      mounted from a different datastore and therefore occurs at a
      different level of the containment hierarchy than it would if it
      were not mounted, it is not possible to use the interface-ref
      type that is defined in the YANG data model for interface
      management [9] to allow the termination point to refer to its
      supporting interface.  For this reason, a new type definition
      "mountedIfRef" is introduced that makes it possible to refer to
      interface information that is mounted and hence has a different
      path.

   o  Finally, a topology also contains a container with a list of
      links.  A link is a network abstraction that connects nodes via
      node termination points.  In the example, directional point-to-
      point links are depicted, in which one node termination point
      serves as source and another as destination.

   The following is a YANG snippet of the module definition which makes
   use of the mountpoint definition.

   <CODE BEGINS>
   module controller-network {
     namespace "urn:cisco:params:xml:ns:yang:controller-network";
     // example only, replace with IANA namespace when assigned
     prefix cn;

     import mount {
       prefix mnt;
     }
     import interfaces {
       prefix if;
     }
     ...
     typedef mountedIfRef {
       type leafref {
         path "/cn:controller-network/cn:network-elements/"
            + "cn:network-element/cn:interfaces/if:interface/if:name";
            // cn:interfaces corresponds to the mountpoint
       }
     }
     ...
     list termination-point {
       key "tp-id";
       ...
       leaf ifref {
         type mountedIfRef;
       }
     ...
     list network-element {
       key "element-id";
       leaf element-id {
         type element-ID;
       }
       container element-address {
         ...   // choice definition that allows to specify
               // host name, IP addresses, URIs, etc.
       }
       mnt:mountpoint "interfaces" {
         mnt:target "./element-address";
         mnt:subtree "/if:interfaces";
       }
       ...
     }
     ...
   <CODE ENDS>

   Finally, the following contains an XML snippet of instantiated YANG
   information.  We assume three datastores: NE1 and NE2 each have a
   datastore (the mount targets) that contains interface configuration
   data, which is mounted into NC's datastore (the mount client).

   Interface information from NE1 datastore:

   <interfaces>
     <interface>
       <name>fastethernet-1/0</name>
       <type>ethernetCsmacd</type>
       <location>1/0</location>
     </interface>
     <interface>
       <name>fastethernet-1/1</name>
       <type>ethernetCsmacd</type>
       <location>1/1</location>
     </interface>
   </interfaces>

   Interface information from NE2 datastore:

   <interfaces>
     <interface>
       <name>fastethernet-1/0</name>
       <type>ethernetCsmacd</type>
       <location>1/0</location>
     </interface>
     <interface>
       <name>fastethernet-1/2</name>
       <type>ethernetCsmacd</type>
       <location>1/2</location>
     </interface>
   </interfaces>

   NC datastore with mounted interface information from NE1 and NE2:

   <controller-network>
     ...
     <network-elements>
       <network-element>
         <element-id>NE1</element-id>
         <element-address> .... </element-address>
         <interfaces>
           <interface>
             <name>fastethernet-1/0</name>
             <type>ethernetCsmacd</type>
             <location>1/0</location>
           </interface>
           <interface>
             <name>fastethernet-1/1</name>
             <type>ethernetCsmacd</type>
             <location>1/1</location>
           </interface>
         </interfaces>
       </network-element>
       <network-element>
         <element-id>NE2</element-id>
         <element-address> .... </element-address>
         <interfaces>
           <interface>
             <name>fastethernet-1/0</name>
             <type>ethernetCsmacd</type>
             <location>1/0</location>
           </interface>
           <interface>
             <name>fastethernet-1/2</name>
             <type>ethernetCsmacd</type>
             <location>1/2</location>
           </interface>
         </interfaces>
       </network-element>
     </network-elements>
     ...
   </controller-network>

Authors' Addresses

   Alexander Clemm
   Cisco Systems

   EMail: alex@cisco.com

   Jan Medved
   Cisco Systems

   EMail: jmedved@cisco.com

   Eric Voit
   Cisco Systems

   EMail: evoit@cisco.com