2 Network Working Group A. Clemm 3 Internet-Draft J. Medved 4 Intended status: Experimental E.
Voit 5 Expires: October 12, 2015 Cisco Systems 6 April 10, 2015 8 Mounting YANG-Defined Information from Remote Datastores 9 draft-clemm-netmod-mount-03.txt 11 Abstract 13 This document introduces capabilities that allow YANG datastores to 14 reference and incorporate information from remote datastores. This 15 is accomplished by extending YANG with the ability to define mount 16 points that act as references to data nodes in remote datastores, and 17 by providing the necessary means to manage and administer those mount 18 points. This facilitates the development of applications that need 19 to access data that transcends individual network devices while 20 improving network-wide object consistency. 22 Status of This Memo 24 This Internet-Draft is submitted in full conformance with the 25 provisions of BCP 78 and BCP 79. 27 Internet-Drafts are working documents of the Internet Engineering 28 Task Force (IETF). Note that other groups may also distribute 29 working documents as Internet-Drafts. The list of current Internet- 30 Drafts is at http://datatracker.ietf.org/drafts/current/. 32 Internet-Drafts are draft documents valid for a maximum of six months 33 and may be updated, replaced, or obsoleted by other documents at any 34 time. It is inappropriate to use Internet-Drafts as reference 35 material or to cite them other than as "work in progress." 37 This Internet-Draft will expire on October 12, 2015. 39 Copyright Notice 41 Copyright (c) 2015 IETF Trust and the persons identified as the 42 document authors. All rights reserved. 44 This document is subject to BCP 78 and the IETF Trust's Legal 45 Provisions Relating to IETF Documents 46 (http://trustee.ietf.org/license-info) in effect on the date of 47 publication of this document. Please review these documents 48 carefully, as they describe your rights and restrictions with respect 49 to this document. Code Components extracted from this document must 50 include Simplified BSD License text as described in Section 4.e of 51 the Trust Legal Provisions and are provided without warranty as 52 described in the Simplified BSD License. 54 This document may contain material from IETF Documents or IETF 55 Contributions published or made publicly available before November 56 10, 2008. The person(s) controlling the copyright in some of this 57 material may not have granted the IETF Trust the right to allow 58 modifications of such material outside the IETF Standards Process. 59 Without obtaining an adequate license from the person(s) controlling 60 the copyright in such materials, this document may not be modified 61 outside the IETF Standards Process, and derivative works of it may 62 not be created outside the IETF Standards Process, except to format 63 it for publication as an RFC or to translate it into languages other 64 than English. 66 Table of Contents 68 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 69 1.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . 3 70 1.2. Examples . . . . . . . . . . . . . . . . . . . . . . . . 4 71 2. Definitions and Acronyms . . . . . . . . . . . . . . . . . . 6 72 3. Example scenarios . . . . . . . . . . . . . . . . . . . . . . 7 73 3.1. Network controller view . . . . . . . . . . . . . . . . . 7 74 3.2. Consistent network configuration . . . . . . . . . . . . 9 75 4. Operating on mounted data . . . . . . . . . . . . . . . . . . 10 76 4.1. General principles . . . . . . . . . . . . . . . . . . . 11 77 4.2. Data retrieval . . . . . . . . . . . . . . . . . . . . . 11 78 4.3. 
Other operations . . . . . . . . . . . . . . . . . . . . 11 79 4.4. Other considerations . . . . . . . . . . . . . . . . . . 12 80 5. Data model structure . . . . . . . . . . . . . . . . . . . . 13 81 5.1. YANG mountpoint extensions . . . . . . . . . . . . . . . 13 82 5.2. YANG structure diagrams . . . . . . . . . . . . . . . . . 14 83 5.3. Mountpoint management . . . . . . . . . . . . . . . . . . 14 84 5.4. Caching . . . . . . . . . . . . . . . . . . . . . . . . . 16 85 5.5. Other considerations . . . . . . . . . . . . . . . . . . 17 86 5.5.1. Authorization . . . . . . . . . . . . . . . . . . . . 17 87 5.5.2. Datastore qualification . . . . . . . . . . . . . . . 17 88 5.5.3. Local mounting . . . . . . . . . . . . . . . . . . . 17 89 5.5.4. Mount cascades . . . . . . . . . . . . . . . . . . . 18 90 5.5.5. Implementation considerations . . . . . . . . . . . . 18 91 5.5.6. Modeling best practices . . . . . . . . . . . . . . . 19 92 6. Datastore mountpoint YANG module . . . . . . . . . . . . . . 19 93 7. Security Considerations . . . . . . . . . . . . . . . . . . . 27 94 8. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 27 95 9. References . . . . . . . . . . . . . . . . . . . . . . . . . 28 96 9.1. Normative References . . . . . . . . . . . . . . . . . . 28 97 9.2. Informative References . . . . . . . . . . . . . . . . . 28 98 Appendix A. Example . . . . . . . . . . . . . . . . . . . . . . 30 99 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 34 101 1. Introduction 103 1.1. Overview 105 This document introduces a new capability that allows YANG datastores 106 [RFC6020] to incorporate and reference information from remote 107 datastores. This is provided by introducing a mountpoint concept. 108 This concept allows to declare a YANG data node in a primary 109 datastore to serve as a "mount point" under which a remote datastore 110 subtree can be mounted. This way, remote data nodes and datastore 111 subtrees can be inserted into the local data hierarchy, arranged 112 below local data nodes. To the user of the primary datastore, this 113 provides visibility to remote data, rendered in a way that makes it 114 appear largely as if it were an integral part of the datastore. This 115 enables users to retrieve local and as remote data in integrated 116 fashion, using e.g. Netconf [RFC6241] or Restconf 117 [I-D.ietf-netconf-restconf] data retrieval primitives. The concept 118 is reminiscent of concepts in a Network File System that allows to 119 mount remote folders and make them appear as if they were contained 120 in the local file system of the user's machine. 122 The mountpoint concept applies in principle to operations beyond data 123 retrieval, i.e. to configuration, RPCs, and notifications. However, 124 support for such operations involves additional considerations, for 125 example if support for configuration transactions and locking (which 126 might now apply across the network) were to be provided. While it is 127 conceivable that additional capabilities for operations on mounted 128 information are introduced at some point in time, their specification 129 is beyond the scope of this specification. 131 YANG does provide means by which modules that have been separately 132 defined can reference and augment one another. YANG also does 133 provide means to specify data nodes that reference other data nodes. 134 However, all the data is assumed to be instantiated as part of the 135 same datastore, for example a datastore provided through a NETCONF 136 server. 
Existing YANG mechanisms do not account for the possibility that information
that needs to be referred to resides not merely in a different subtree of the
same datastore, or in a separate module that is also instantiated in the same
datastore, but in a genuinely different datastore that is provided by a
different server.

The ability to mount information from remote datastores is new and not
covered by existing YANG mechanisms.  Until now, management information
provided in a datastore has been intrinsically tied to the same server.  In
contrast, the capability introduced in this specification allows the server
to represent information from remote systems as if it were its own and
contained in its own local data hierarchy.

The mounting of information from remote datastores into another datastore is
accomplished by a set of YANG extensions that allow such mount points to be
defined.  For this purpose, a new YANG module is introduced.  The module
defines the YANG extensions, as well as a data model that can be used to
manage the mountpoints and the mounting process itself.  Only the mounting
module and its server (i.e. the "receivers" or "consumers" of the mounted
information) need to be aware of the concepts introduced here.  Mounting is
transparent to the "providers" of the mounted information and to the models
that are being mounted; any data nodes or subtrees within any YANG model can
be mounted.

1.2.  Examples

The requirements for mounting YANG subtrees from remote datastores, along
with a set of associated use cases, are documented in
[I-D.voit-netmod-peer-mount-requirements].  The ability to mount data from
remote datastores is useful to address various problems that several
categories of applications are faced with.

One category of applications that can leverage this capability is network
controller applications that need to present a consolidated view of
management information in datastores across a network.  Controller
applications are faced with the problem that in order to expose information,
that information needs to be part of their own datastore.  Today, this
requires support of a corresponding YANG data module.  In order to expose
information that concerns other network elements, that information has to be
replicated into the controller's own datastore in the form of data nodes that
may mirror but are clearly distinct from the corresponding data nodes in the
network element's datastore.  In addition, in many cases, a controller needs
to impose its own hierarchy on the data that is different from the one that
was defined as part of the original module.  An example of this concerns
interface data, both operational data (e.g. various types of interface
statistics) and configuration data, such as defined in [RFC7223].  This data
will be contained in a top-level container ("interfaces", in this particular
case) in a network element datastore.  The controller may need to provide its
clients a view of interface data from multiple devices under its scope of
control.  One way to do so would involve organizing the data in a list with
separate list elements for each device.  However, this in turn would require
the introduction of redundant YANG modules that effectively replicate the
same interface data, save for differences in hierarchy.
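As a hypothetical illustration of the redundant re-modeling just described, a
controller-specific module might repeat interface data underneath a
per-device list.  All module and node names in the following sketch are
invented for this example and are not defined by this document:

   module example-controller-interfaces {
     namespace "urn:example:controller-interfaces";
     prefix exci;

     container devices {
       list device {
         key "device-id";
         leaf device-id {
           type string;
         }
         container interfaces {
           // Re-definition of data nodes that already exist in the
           // network element's own interface model, differing only
           // in their placement within the hierarchy.
           list interface {
             key "name";
             leaf name {
               type string;
             }
             leaf description {
               type string;
             }
             leaf enabled {
               type boolean;
             }
           }
         }
       }
     }
   }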
196 By directly mounting information from network element datastores, the 197 controller does not need to replicate the same information from 198 multiple datastores, nor does it need to re-define any network 199 element and system-level abstractions to be able to put them in the 200 context of network abstractions. Instead, the subtree of the remote 201 system is attached to the local mount point. Operations that need to 202 access data below the mount point are in effect transparently 203 redirected to remote system, which is the authoritative owner of the 204 data. The mounting system does not even necessarily need to be aware 205 of the specific data in the remote subtree. Optionally, caching 206 strategies can be employed in which the mounting system prefetches 207 data. 209 A second category of applications concerns decentralized networking 210 applications that require globally consistent configuration of 211 parameters. When each network element maintains its own datastore 212 with the same configurable settings, a single global change requires 213 modifying the same information in many network elements across a 214 network. In case of inconsistent configurations, network failures 215 can result that are difficult to troubleshoot. In many cases, what 216 is more desirable is the ability to configure such settings in a 217 single place, then make them available to every network element. 218 Today, this requires in general the introduction of specialized 219 servers and configuration options outside the scope of NETCONF, such 220 as RADIUS [RFC2866] or DHCP [RFC2131]. In order to address this 221 within the scope of NETCONF and YANG, the same information would have 222 to be redundantly modeled and maintained, representing operational 223 data (mirroring some remote server) on some network elements and 224 configuration data on a designated master. Either way, additional 225 complexity ensues. 227 Instead of replicating the same global parameters across different 228 datastores, the solution presented in this document allows a single 229 copy to be maintained in a subtree of single datastore that is then 230 mounted by every network element that requires awareness of these 231 parameters. The global parameters can be hosted in a controller or a 232 designated network element. This considerably simplifies the 233 management of such parameters that need to be known across elements 234 in a network and require global consistency. 236 It should be noted that for these and many other applications merely 237 having a view of the remote information is sufficient. It allows to 238 define consolidated views of information without the need for 239 replicating data and models that have already been defined, to audit 240 information, and to validate consistency of configurations across a 241 network. Only retrieval operations are required; no operations that 242 involve configuring remote data are involved. 244 2. Definitions and Acronyms 246 Data node: An instance of management information in a YANG datastore. 248 DHCP: Dynamic Host Configuration Protocol. 250 Datastore: A conceptual store of instantiated management information, 251 with individual data items represented by data nodes which are 252 arranged in hierarchical manner. 254 Datastore-push: A mechanism that allows a client to subscribe to 255 updates from a datastore, which are then automatically pushed by the 256 server to the client. 
258 Data subtree: An instantiated data node and the data nodes that are 259 hierarchically contained within it. 261 Mount client: The system at which the mount point resides, into which 262 the remote subtree is mounted. 264 Mount point: A data node that receives the root node of the remote 265 datastore being mounted. 267 Mount server: The server with which the mount client communicates and 268 which provides the mount client with access to the mounted 269 information. Can be used synonymously with mount target. 271 Mount target: A remote server whose datastore is being mounted. 273 NACM: NETCONF Access Control Model 275 NETCONF: Network Configuration Protocol 277 RADIUS: Remote Authentication Dial In User Service. 279 RPC: Remote Procedure Call 281 Remote datastore: A datastore residing at a remote node. 283 URI: Uniform Resource Identifier 285 YANG: A data definition language for NETCONF 287 3. Example scenarios 289 The following example scenarios outline some of the ways in which the 290 ability to mount YANG datastores can be applied. Other mount 291 topologies can be conceived in addition to the ones presented here. 293 3.1. Network controller view 295 Network controllers can use the mounting capability to present a 296 consolidated view of management information across the network. This 297 allows network controllers to expose network-wide abstractions, such 298 as topologies or paths, multi-device abstractions, such as VRRP 299 [RFC3768], and network-element specific abstractions, such as 300 information about a network element's interfaces. 302 While an application on top of a controller could bypass the 303 controller to access network elements directly for their element- 304 specific abstractions, this would come at the expense of added 305 inconvenience for the client application. In addition, it would 306 compromise the ability to provide layered architectures in which 307 access to the network by controller applications is truly channeled 308 through the controller. 310 Without a mounting capability, a network controller would need to at 311 least conceptually replicate data from network elements to provide 312 such a view, incorporating network element information into its own 313 controller model that is separate from the network element's, 314 indicating that the information in the controller model is to be 315 populated from network elements. This can introduce issues such as 316 data inconsistency and staleness. Equally important, it would lead 317 to the need to define redundant data models: one model that is 318 implemented by the network element itself, and another model to be 319 implemented by the network controller. This leads to poor 320 maintainability, as analogous information has to be redundantly 321 defined and implemented across different data models. In general, 322 controllers cannot simply support the same modules as their network 323 elements for the same information because that information needs to 324 be put into a different context. This leads to "node"-information 325 that needs to be instantiated and indexed differently, because there 326 are multiple instances across different data stores. 328 For example, "system"-level information of a network element would 329 most naturally placed into a top-level container at that network 330 element's datastore. At the same time, the same information in the 331 context of the overall network, such as maintained by a controller, 332 might better be provided in a list. 
For example, the controller 333 might maintain a list with a list element for each network element, 334 underneath which the network element's system-level information is 335 contained. However, the containment structure of data nodes in a 336 module, once defined, cannot be changed. This means that in the 337 context of a network controller, a second module that repeats the 338 same system-level information would need to be defined, implemented, 339 and maintained. Any augmentations that add additional system-level 340 information to the original module will likewise need to be 341 redundantly defined, once for the "system" module, a second time for 342 the "controller" module. 344 By allowing a network controller to directly mount information from 345 network element datastores, the controller does not need to replicate 346 the same information from multiple datastores. Perhaps even more 347 importantly, the need to re-define any network element and system- 348 level abstractions just to be able to put them in the context of 349 network abstractions is avoided. In this solution, a network 350 controller's datastore mounts information from many network element 351 datastores. For example, the network controller datastore (the 352 "primary" datastore) could implement a list in which each list 353 element contains a mountpoint. Each mountpoint mounts a subtree from 354 a different network element's datastore. The data from the mounted 355 subtrees is then accessible to clients of the primary datastore using 356 the usual data retrieval operations. 358 This scenario is depicted in Figure 1. In the figure, M1 is the 359 mountpoint for the datastore in Network Element 1 and M2 is the 360 mountpoint for the datastore in Network Element 2. MDN1 is the 361 mounted data node in Network Element 1, and MDN2 is the mounted data 362 node in Network Element 2. 364 +-------------+ 365 | Network | 366 | Controller | 367 | Datastore | 368 | | 369 | +--N10 | 370 | +--N11 | 371 | +--N12 | 372 | +--M1******************************* 373 | +--M2****** * 374 | | * * 375 +-------------+ * * 376 * +---------------+ * +---------------+ 377 * | +--N1 | * | +--N5 | 378 * | +--N2 | * | +--N6 | 379 ********> +--MDN2 | *********> +--MDN1 | 380 | +--N3 | | +--N7 | 381 | +--N4 | | +--N8 | 382 | | | | 383 | Network | | Network | 384 | Element | | Element | 385 | Datastore | | Datastore | 386 +---------------+ +---------------+ 388 Figure 1: Network controller mount topology 390 3.2. Consistent network configuration 392 A second category of applications concerns decentralized networking 393 applications that require globally consistent configuration of 394 parameters that need to be known across elements in a network. 395 Today, the configuration of such parameters is generally performed on 396 a per network element basis, which is not only redundant but, more 397 importantly, error-prone. Inconsistent configurations lead to 398 erroneous network behavior that can be challenging to troubleshoot. 400 Using the ability to mount information from remote datastores opens 401 up a new possibility for managing such settings. Instead of 402 replicating the same global parameters across different datastores, a 403 single copy is maintained in a subtree of single datastore. This 404 datastore can hosted in a controller or a designated network element. 405 The subtree is subsequently mounted by every network element that 406 requires access to these parameters. 
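As a hypothetical sketch of this arrangement (all module and node names are
invented for this example; the mountpoint extensions themselves are defined
in Section 5.1 and Section 6), a network element could include in its own
model a container that serves as the mountpoint for the shared settings:

   module example-shared-settings {
     namespace "urn:example:shared-settings";
     prefix exss;

     import ietf-inet-types {
       prefix inet;
     }
     import ietf-mount {
       prefix mnt;
     }

     container shared-settings-source {
       // Data node identifying the system that hosts the shared
       // settings; referenced as the mount target below.
       leaf address {
         type inet:ip-address;
       }
       container global-settings {
         // The remote subtree (here assumed to be a "settings"
         // container, "/ex:settings", on the controller or on a
         // designated network element) is rendered below this
         // container.
         mnt:mountpoint "global-settings" {
           mnt:target "../address";
           mnt:subtree "/ex:settings";
         }
       }
     }
   }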
408 In many ways, this category of applications is an inverse of the 409 previous category: Whereas in the network controller case data from 410 many different datastores would be mounted into the same datastore 411 with multiple mountpoints, in this case many elements, each with 412 their own datastore, mount the same remote datastore, which is then 413 mounted by many different systems. 415 The scenario is depicted in Figure 2. In the figure, M1 is the 416 mountpoint for the Network Controller datastore in Network Element 1 417 and M2 is the mountpoint for the Network Controller datastore in 418 Network Element 2. MDN is the mounted data node in the Network 419 Controller datastore that contains the data nodes that represent the 420 shared configuration settings. (Note that there is no reason why the 421 Network Controller Datastore in this figure could not simply reside 422 on a network element itself; the division of responsibilities is a 423 logical one. 425 +---------------+ +---------------+ 426 | Network | | Network | 427 | Element | | Element | 428 | Datastore | | Datastore | 429 | | | | 430 | +--N1 | | +--N5 | 431 | | +--N2 | | | +--N6 | 432 | | +--N2 | | | +--N6 | 433 | | +--N3 | | | +--N7 | 434 | | +--N4 | | | +--N8 | 435 | | | | | | 436 | +--M1 | | +--M2 | 437 +-----*---------+ +-----*---------+ 438 * * +---------------+ 439 * * | | 440 * * | +--N10 | 441 * * | +--N11 | 442 *********************************************> +--MDN | 443 | +--N20 | 444 | +--N21 | 445 | ... | 446 | +--N22 | 447 | | 448 | Network | 449 | Controller | 450 | Datastore | 451 +---------------+ 453 Figure 2: Distributed config settings topology 455 4. Operating on mounted data 457 This section provides a rough illustration of the operations flow 458 involving mounted datastores. 460 4.1. General principles 462 The first thing that should be noted about these operations flows 463 concerns the fact that a mount client essentially constitutes a 464 special management application that interacts with a remote system. 465 To the remote system, the mount client constitutes in effect just 466 another application. The remote system is the authoritative owner of 467 the data. While it is conceivable that the remote system (or an 468 application that proxies for the remote system) provides certain 469 functionality to facilitate the specific needs of the mount client to 470 make it more efficient, the fact that another system decides to 471 expose a certain "view" of that data is fundamentally not the remote 472 system's concern. 474 When a client application makes a request to a server that involves 475 data that is mounted from a remote system, the server will 476 effectively act as a proxy to the remote system on the client 477 application's behalf. It will extract from the client application 478 request the portion that involves the mounted subtree from the remote 479 system. It will strip that portion of the local context, i.e. remove 480 any local data paths and insert the data path of the mounted remote 481 subtree, as appropriate. The server will then forward the transposed 482 request to the remote system that is the authoritative owner of the 483 mounted data, acting itself as a client to the remote server. Upon 484 receiving the reply, the server will transpose the results into the 485 local context as needed, for example map the data paths into the 486 local data tree structure, and combine those results with the results 487 of the remainder portion of the original request. 489 4.2. 
Data retrieval

Data retrieval operations are the only category of operations that is
supported for peer-mounted information.  In that case, a NETCONF "get" or
"get-config" operation might be applied to a subtree whose scope includes a
mount point.  When resolving the mount point, the server issues its own "get"
or "get-config" request against the remote system's subtree that is attached
to the mount point.  The returned information is then inserted into the data
structure that is in turn returned to the client that originally invoked the
request.

4.3.  Other operations

The fact that data retrieval operations are the only category of operations
supported for peer-mounted information does not preclude other operations
from being applied to datastore subtrees that contain mountpoints and
peer-mounted information.  Peer-mounted information is simply transparent to
those operations.  When an operation is applied to a subtree which includes
mountpoints, mounted information is ignored for the purposes of the
operation.  For example, for a NETCONF "edit-config" operation that includes
a subtree with a mountpoint, a server will ignore the data under the
mountpoint and apply the operation only to the local configuration.  Mounted
data is "read-only" data.  The server does not even need to return an error
message that the operation could not be applied to mounted data; the
mountpoint is simply ignored.

In principle, it is conceivable that operations other than data retrieval are
applied to mounted data as well.  For example, an operation to edit
configuration information might expect edits to be applied to remote systems
as part of the operation, where the edited subtree involves mounted
information.  However, editing of information and "writing through" to remote
systems potentially involves significant complexity, particularly if
transactions and locking across multiple configuration items are involved.
Support for such operations will require additional capabilities, the
specification of which is beyond the scope of this document.

Likewise, YANG-Mount does not extend towards RPCs that are defined as part of
YANG modules whose contents are being mounted.  Support for RPCs that involve
mounted portions of the datastore, while conceivable, would require the
introduction of an additional capability, whose definition is outside the
scope of this document.

By the same token, YANG-Mount does not extend towards notifications.  It is
conceivable to offer such support in the future using a separate capability,
the definition of which is once again outside the scope of this document.

4.4.  Other considerations

Since mounting of information typically involves communication with a remote
system, there is a possibility that the remote system will not respond within
a certain amount of time, that connectivity is lost, or that other errors
occur.  Accordingly, the ability to mount datastores also involves mountpoint
management, which includes the ability to configure timeouts and retries, and
the management of mountpoint state (including dynamic addition and removal of
mountpoints).  Mountpoint management is discussed in Section 5.3.

It is expected that some implementations will introduce caching schemes.
Caching can increase performance and efficiency in certain scenarios (for
example, in the case of data that is frequently read but that rarely
changes), but it increases implementation complexity.  Caching is not
required for YANG-Mount to work; without it, access to mounted information is
"on-demand", with the authoritative data node always being accessed.  Whether
to perform caching is a local implementation decision.

When caching is introduced, it can benefit from the ability to subscribe to
updates on the remote data at the remote server.  Requirements for such a
capability have been defined in [I-D.ietf-i2rs-pub-sub-requirements].  Some
optimizations to facilitate caching support are discussed in Section 5.4.

5.  Data model structure

5.1.  YANG mountpoint extensions

At the center of the module is a set of YANG extensions that allow a
mountpoint to be defined.

o  The first extension, "mountpoint", is used to declare a mountpoint.  The
   extension takes the name of the mountpoint as an argument.

o  The second extension, "target", serves as a substatement underneath a
   mountpoint statement.  It takes an argument that identifies the target
   system.  The argument is a reference to a data node that contains the
   information that is needed to identify and address a remote server, such
   as an IP address, a host name, or a URI [RFC3986].

o  The third extension, "subtree", also serves as a substatement underneath a
   mountpoint statement.  It takes an argument that defines the root node of
   the datastore subtree that is to be mounted, specified as a string that
   contains a path expression.

A mountpoint MUST be contained underneath a container.  Future revisions
might allow for mountpoints to be contained underneath other data nodes, such
as lists, leaf-lists, and cases.  However, to keep things simple, at this
point mounting is only allowed directly underneath a container.

Only a single data node can be mounted at one time.  While the mount target
could refer to any data node, it is recommended that, as a best practice, the
mount target SHOULD refer to a container.  It is possible to maintain, for
example, a list of mount points, with each mount point having as its mount
target an element of a remote list.  However, to avoid unnecessary
proliferation of the number of mount points and the associated management
overhead, when data from lists or leaf-lists is to be mounted, a container
containing the list or leaf-list, respectively, SHOULD be mounted instead of
individual list elements.

It is possible for a mounted datastore to contain another mountpoint, thus
leading to several levels of mount indirection.  However, mountpoints MUST
NOT introduce circular dependencies.  In particular, a mounted datastore MUST
NOT contain a mountpoint which specifies the mounting datastore as a target
and a subtree which contains as its root node a data node that in turn
contains the original mountpoint.  Whenever a mount operation is performed,
this condition MUST be validated by the mount client.
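The following is a minimal sketch of how these extensions might be used
within a module.  It is not part of the module defined in Section 6; it
assumes that the ietf-mount module is imported with prefix "mnt" and
ietf-inet-types with prefix "inet", and "/ex-system:system" stands for a
hypothetical subtree in the remote datastore:

   container peer-system {
     // Data node holding the information needed to identify and
     // address the remote server; referenced by the mnt:target
     // statement below.
     leaf address {
       type inet:ip-address;
     }
     container system-info {
       // "system-info" serves as the mountpoint; the remote subtree
       // named by mnt:subtree is rendered below this container.
       mnt:mountpoint "system-info" {
         mnt:target "../address";
         mnt:subtree "/ex-system:system";
       }
     }
   }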
5.2.  YANG structure diagrams

YANG data model structure overviews have proven very useful to convey the
"Big Picture".  It would be useful to indicate in such structure overviews
that a given data node serves as a mountpoint.  We propose for this purpose a
corresponding extension to the structure representation convention:
specifically, the name of the mounting data node is prefixed with an
upper-case 'M'.

   rw network
   +-- rw nodes
       +-- rw node [node-ID]
          +-- rw node-ID
          +-- M node-system-info

5.3.  Mountpoint management

The YANG module contains facilities to manage the mountpoints themselves.

For this purpose, a list of the mountpoints is introduced.  Each list element
represents a single mountpoint.  It includes an identification of the mount
target, i.e. the remote system hosting the remote datastore, and a definition
of the subtree of the remote data node being mounted.  It also includes
monitoring information about the current status (indicating whether the mount
has been successful and is operational, or whether an error condition
applies, such as the target being unreachable or referring to an invalid
subtree).

In addition to the list of mountpoints, a set of global mount policy settings
allows parameters such as mount retries and timeouts to be set.

Each mountpoint list element also contains a set of the same configuration
knobs, allowing administrators to override global mount policies and
configure mount policies on a per-mountpoint basis if needed.

Mounting can occur in one of two ways: automatically (dynamically performed
as part of system operation) or manually (administered by a user or client
application).  A separate mountpoint-origin object is used to distinguish
between manually configured and automatically populated mountpoints.

Whether mounting occurs automatically or needs to be manually configured by a
user or an application can depend on the mountpoint being defined, i.e. the
semantics of the model.

When configured automatically, mountpoint information is automatically
populated by the datastore that implements the mountpoint.  The precise
mechanisms for discovering mount targets and bootstrapping mount points are
provided by the mount client infrastructure and are outside the scope of this
specification.  Likewise, when a mountpoint should be deleted and when it
should merely have its mount-status indicate that the target is unreachable
is a system-specific implementation decision.

Manual mounting consists of two steps.  In a first step, a mountpoint is
manually configured by a user or client application through administrative
action.  Once a mountpoint has been configured, actual mounting occurs
through an RPC that is defined specifically for that purpose.  To unmount, a
separate RPC is invoked; mountpoint configuration information needs to be
explicitly deleted.  Manual mounting can also be used to override automatic
mounting, for example to allow an administrator to set up or remove a
mountpoint.

It should be noted that mountpoint management does not allow users to
manually "extend" the model, i.e. to simply add a subtree underneath some
arbitrary data node of a datastore without a supporting mountpoint defined in
the model.  A mountpoint definition is a formal part of the model with
well-defined semantics.  Accordingly, mountpoint management does not allow
users to dynamically "extend" the data model itself.  It allows users to
populate the datastore and mount structure within the confines of a model
that has been defined beforehand.
692 The structure of the mountpoint management data model is depicted in 693 the following figure, where brackets enclose list keys, "rw" means 694 configuration, "ro" operational state data, and "?" designates 695 optional nodes. Parantheses enclose choice and case nodes. The 696 figure does not depict all definitions; it is intended to illustrate 697 the overall structure. 699 rw mount-server-mgmt 700 +-- rw mountpoints 701 | +-- rw mountpoint [mountpoint-id] 702 | +-- rw mountpoint-id string 703 | +-- rw mount-target 704 | | +--: (IP) 705 | | | +-- rw target-ip yang:ip-address 706 | | +--: (URI) 707 | | | +-- rw uri yang:uri 708 | | +--: (host-name) 709 | | | +-- rw hostname yang:host 710 | | +-- (node-ID) 711 | | | +-- rw node-info-ref mnt:subtree-ref 712 | | +-- (other) 713 | | +-- rw opaque-target-id string 714 | +-- rw subtree-ref mnt:subtree-ref 715 | +-- ro mountpoint-origin enumeration 716 | +-- ro mount-status mnt:mount-status 717 | +-- rw manual-mount? empty 718 | +-- rw retry-timer? uint16 719 | +-- rw number-of-retries? uint8 720 +-- rw global-mount-policies 721 +-- rw manual-mount? empty 722 +-- rw retry-time? uint16 723 +-- rw number-of-retries? uint8 725 5.4. Caching 727 Under certain circumstances, it can be useful to maintain a cache of 728 remote information. Instead of accessing the remote system, requests 729 are served from a copy that is locally maintained. This is 730 particularly advantageous in cases where data is slow changing, i.e. 731 when there are many more "read" operations than changes to the 732 underlying data node, and in cases when a significant delay were 733 incurred when accessing the remote system, which might be prohibitive 734 for certain applications. Examples of such applications are 735 applications that involve real-time control loops requiring response 736 times that are measured in milliseconds. However, as data nodes that 737 are mounted from an authoritative datastore represent the "golden 738 copy", it is important that any modifications are reflected as soon 739 as they are made. 741 It is a local implementation decision of mount clients whether to 742 cache information once it has been fetched. However, in order to 743 support more powerful caching schemes, it becomes necessary for the 744 mount server to "push" information proactively. For this purpose, it 745 is useful for the mount client to subscribe for updates to the 746 mounted information at the mount server. A corresponding mechanism 747 that can be leveraged for this purpose is specified in 748 [I-D.clemm-netconf-yang-push]. 750 Note that caching large mountpoints can be expensive. Therefore 751 limiting the amount of data unnecessarily passed when mounting near 752 the top of a YANG subtree is important. For these reasons, an 753 ability to specify a particular caching strategy in conjunction with 754 mountpoints can be desirable, including the ability to exclude 755 certain nodes and subtrees from caching. According capabilities may 756 be introduced in a future version of this draft. 758 5.5. Other considerations 760 5.5.1. Authorization 762 Access to mounted information is subject to authorization rules. To 763 the mounted system, a mounting client will in general appear like any 764 other client. Authorization privileges for remote mounting clients 765 need to be specified through NACM (NETCONF Access Control Model) 766 [RFC6536]. 768 5.5.2. 
Datastore qualification 770 It is conceivable to differentiate between different datastores on 771 the remote server, that is, to designate the name of the actual 772 datastore to mount, e.g. "running" or "startup". However, for the 773 purposes of this spec, we assume that the datastore to be mounted is 774 generally implied. Mounted information is treated as analogous to 775 operational data; in general, this means the running or "effective" 776 datastore is the target. That said, the information which targets to 777 mount does constitute configuration and can hence be part of a 778 startup or candidate datastore. 780 It is conceivable to use mount in conjunction with ephemeral 781 datastores, to address requirements outlined in 782 [I-D.haas-i2rs-netmod-netconf-requirements]. Support for such a 783 scheme is for further study and may be included in a future revision 784 of this spec. 786 5.5.3. Local mounting 788 It is conceivable that the mount target does not reside in a remote 789 datastore, but that data nodes in the same datastore as the 790 mountpoint are targeted for mounting. This amounts to introducing an 791 "aliasing" capability in a datastore. While this is not the scenario 792 that is primarily targeted, it is supported and there may be valid 793 use cases for it. 795 5.5.4. Mount cascades 797 It is possible for the mounted subtree to in turn contain a 798 mountpoint. However, circular mount relationships MUST NOT be 799 introduced. For this reason, a mounted subtree MUST NOT contain a 800 mountpoint that refers back to the mounting system with a mount 801 target that directly or indirectly contains the originating 802 mountpoint. As part of a mount operation, the mount points of the 803 mounted system need to be checked accordingly. 805 5.5.5. Implementation considerations 807 Implementation specifics are outside the scope of this specification. 808 That said, the following considerations apply: 810 Systems that wish to mount information from remote datastores need to 811 implement a mount client. The mount client communicates with a 812 remote system to access the remote datastore. To do so, there are 813 several options: 815 o The mount client acts as a NETCONF client to a remote system. 816 Alternatively, another interface to the remote system can be used, 817 such as a REST API using JSON encodings, as specified in 818 [I-D.ietf-netconf-restconf]. Either way, to the remote system, 819 the mount client constitutes essentially a client application like 820 any other. The mount client in effect IS a special kind of client 821 application. 823 o The mount client communicates with a remote mount server through a 824 separate protocol. The mount server is deployed on the same 825 system as the remote NETCONF datastore and interacts with it 826 through a set of local APIs. 828 o The mount client communicates with a remote mount server that acts 829 as a NETCONF client proxy to a remote system, on the client's 830 behalf. The communication between mount client and remote mount 831 server might involve a separate protocol, which is translated into 832 NETCONF operations by the remote mount server. 834 It is the responsibility of the mount client to manage the 835 association with the target system, e.g. validate it is still 836 reachable by maintaining a permanent association, perform 837 reachability checks in case of a connectionless transport, etc. 839 It is the responsibility of the mount client to manage the 840 mountpoints. 
This means that the mount client needs to populate the 841 mountpoint monitoring information (e.g. keep mount-status up to data 842 and determine in the case of automatic mounting when to add and 843 remove mountpoint configuration). In the case of automatic mounting, 844 the mount client also interacts with the mountpoint discovery and 845 bootstrap process. 847 The mount client needs to also participate in servicing datastore 848 operations involving mounted information. An operation requested 849 involving a mountpoint is relayed by the mounting system's 850 infrastructure to the mount client. For example, a request to 851 retrieve information from a datastore leads to an invocation of an 852 internal mount client API when a mount point is reached. The mount 853 client then relays a corresponding operation to the remote datastore. 854 It subsequently relays the result along with any responses back to 855 the invoking infrastructure, which then merges the result (e.g. a 856 retrieved subtree with the rest of the information that was 857 retrieved) as needed. Relaying the result may involve the need to 858 transpose error response codes in certain corner cases, e.g. when 859 mounted information could not be reached due to loss of connectivity 860 with the remote server, or when a configuration request failed due to 861 validation error. 863 5.5.6. Modeling best practices 865 There is a certain amount of overhead associated with each mount 866 point. The mount point needs to be managed and state maintained. 867 Data subscriptions need to be maintained. Requests including mounted 868 subtrees need to be decomposed and responses from multiple systems 869 combined. 871 For those reasons, as a general best practice, models that make use 872 of mount points SHOULD be defined in a way that minimizes the number 873 of mountpoints required. Finely granular mounts, in which multiple 874 mountpoints are maintained with the same remote system, each 875 containing only very small data subtrees, SHOULD be avoided. For 876 example, lists SHOULD only contain mountpoints when individual list 877 elements are associated with different remote systems. To mount data 878 from lists in remote datastores, a container node that contains all 879 list elements SHOULD be mounted instead of mounting each list element 880 individually. Likewise, instead of having mount points refer to 881 nodes contained underneath choices, a mountpoint should refer to a 882 container of the choice. 884 6. 
Datastore mountpoint YANG module 886 887 file "ietf-mount@2015-04-10.yang" 888 module ietf-mount { 889 namespace "urn:ietf:params:xml:ns:yang:ietf-mount"; 890 prefix mnt; 891 import ietf-yang-types { 892 prefix yang; 893 } 895 organization 896 "IETF NETMOD (NETCONF Data Modeling Language) Working Group"; 897 contact 898 "WG Web: http://tools.ietf.org/wg/netmod/ 899 WG List: netmod@ietf.org 901 WG Chair: Juergen Schoenwaelder 902 j.schoenwaelder@jacobs-university.de 904 WG Chair: Tom Nadeau 905 tnadeau@lucidvision.com 907 Editor: Alexander Clemm 908 alex@cisco.com 910 Editor: Jan Medved 911 jmedved@cisco.com 913 Editor: Eric Voit 914 evoit@cisco.com"; 915 description 916 "This module provides a set of YANG extensions and definitions 917 that can be used to mount information from remote datastores."; 919 revision 2015-04-10 { 920 description 921 "Initial revision."; 922 reference 923 "draft-clemm-netmod-mount-03.txt"; 924 } 926 extension mountpoint { 927 argument name; 928 description 929 "This YANG extension is used to mount data from a remote 930 system in place of the node under which this YANG extension 931 statement is used. 933 This extension takes one argument which specifies the name 934 of the mountpoint. 936 This extension can occur as a substatement underneath a 937 container statement, a list statement, or a case statement. 938 As a best practice, it SHOULD occur as statement only 939 underneath a container statement, but it MAY also occur 940 underneath a list or a case statement. 942 The extension takes two parameters, target and subtree, each 943 defined as their own YANG extensions. 944 A mountpoint statement MUST contain a target and a subtree 945 substatement for the mountpoint definition to be valid. 947 The target system MAY be specified in terms of a data node 948 that uses the grouping 'mnt:mount-target'. However, it 949 can be specified also in terms of any other data node that 950 contains sufficient information to address the mount target, 951 such as an IP address, a host name, or a URI. 953 The subtree SHOULD be specified in terms of a data node of 954 type 'mnt:subtree-ref'. The targeted data node MUST 955 represent a container. 957 It is possible for the mounted subtree to in turn contain a 958 mountpoint. However, circular mount relationships MUST NOT 959 be introduced. For this reason, a mounted subtree MUST NOT 960 contain a mountpoint that refers back to the mounting system 961 with a mount target that directly or indirectly contains the 962 originating mountpoint."; 963 } 965 extension target { 966 argument target-name; 967 description 968 "This YANG extension is used to specify a remote target 969 system from which to mount a datastore subtree. This YANG 970 extension takes one argument which specifies the remote 971 system. In general, this argument will contain the name of 972 a data node that contains the remote system information. It 973 is recommended that the reference data node uses the 974 mount-target grouping that is defined further below in this 975 module. 977 This YANG extension can occur only as a substatement below 978 a mountpoint statement. It MUST NOT occur as a substatement 979 below any other YANG statement."; 980 } 982 extension subtree { 983 argument subtree-path; 984 description 985 "This YANG extension is used to specify a subtree in a 986 datastore that is to be mounted. This YANG extension takes 987 one argument which specifies the path to the root of the 988 subtree. 
The root of the subtree SHOULD represent an 989 instance of a YANG container. However, it MAY represent 990 also another data node. 992 This YANG extension can occur only as a substatement below 993 a mountpoint statement. It MUST NOT occur as a substatement 994 below any other YANG statement."; 995 } 997 feature mount-server-mgmt { 998 description 999 "Provide additional capabilities to manage remote mount 1000 points"; 1001 } 1003 typedef mount-status { 1004 type enumeration { 1005 enum "ok" { 1006 description 1007 "Mounted"; 1008 } 1009 enum "no-target" { 1010 description 1011 "The argument of the mountpoint does not define a 1012 target system"; 1013 } 1014 enum "no-subtree" { 1015 description 1016 "The argument of the mountpoint does not define a 1017 root of a subtree"; 1018 } 1019 enum "target-unreachable" { 1020 description 1021 "The specified target system is currently 1022 unreachable"; 1023 } 1024 enum "mount-failure" { 1025 description 1026 "Any other mount failure"; 1027 } 1028 enum "unmounted" { 1029 description 1030 "The specified mountpoint has been unmounted as the 1031 result of a management operation"; 1032 } 1033 } 1034 description 1035 "This type is used to represent the status of a 1036 mountpoint."; 1037 } 1039 typedef subtree-ref { 1040 type string; 1041 description 1042 "This string specifies a path to a datanode. It corresponds 1043 to the path substatement of a leafref type statement. Its 1044 syntax needs to conform to the corresponding subset of the 1045 XPath abbreviated syntax. Contrary to a leafref type, 1046 subtree-ref allows to refer to a node in a remote datastore. 1047 Also, a subtree-ref refers only to a single node, not a list 1048 of nodes."; 1049 } 1051 grouping mount-monitor { 1052 description 1053 "This grouping contains data nodes that indicate the 1054 current status of a mount point."; 1055 leaf mount-status { 1056 type mount-status; 1057 config false; 1058 description 1059 "Indicates whether a mountpoint has been successfully 1060 mounted or whether some kind of fault condition is 1061 present."; 1062 } 1063 } 1065 grouping mount-target { 1066 description 1067 "This grouping contains data nodes that can be used to 1068 identify a remote system from which to mount a datastore 1069 subtree."; 1070 container mount-target { 1071 description 1072 "A container is used to keep mount target information 1073 together."; 1074 choice target-address-type { 1075 mandatory true; 1076 description 1077 "Allows to identify mount target in different ways, 1078 i.e. 
using different types of addresses."; 1079 case IP { 1080 leaf target-ip { 1081 type yang:ip-address; 1082 description 1083 "IP address identifying the mount target."; 1084 } 1085 } 1086 case URI { 1087 leaf uri { 1088 type yang:uri; 1089 description 1090 "URI identifying the mount target"; 1091 } 1092 } 1093 case host-name { 1094 leaf hostname { 1095 type yang:host; 1096 description 1097 "Host name of mount target."; 1098 } 1099 } 1100 case node-ID { 1101 leaf node-info-ref { 1102 type subtree-ref; 1103 description 1104 "Node identified by named subtree."; 1105 } 1106 } 1107 case other { 1108 leaf opaque-target-ID { 1109 type string; 1110 description 1111 "Catch-all; could be used also for mounting 1112 of data nodes that are local."; 1113 } 1114 } 1115 } 1116 } 1117 } 1119 grouping mount-policies { 1120 description 1121 "This grouping contains data nodes that allow to configure 1122 policies associated with mountpoints."; 1123 leaf manual-mount { 1124 type empty; 1125 description 1126 "When present, a specified mountpoint is not 1127 automatically mounted when the mount data node is 1128 created, but needs to mounted via specific RPC 1129 invocation."; 1130 } 1131 leaf retry-timer { 1132 type uint16; 1133 units "seconds"; 1134 description 1135 "When specified, provides the period after which 1136 mounting will be automatically reattempted in case of a 1137 mount status of an unreachable target"; 1138 } 1139 leaf number-of-retries { 1140 type uint8; 1141 description 1142 "When specified, provides a limit for the number of 1143 times for which retries will be automatically 1144 attempted"; 1145 } 1146 } 1148 rpc mount { 1149 description 1150 "This RPC allows an application or administrative user to 1151 perform a mount operation. If successful, it will result in 1152 the creation of a new mountpoint."; 1153 input { 1154 leaf mountpoint-id { 1155 type string { 1156 length "1..32"; 1157 } 1158 description 1159 "Identifier for the mountpoint to be created. 1160 The mountpoint-id needs to be unique; 1161 if the mountpoint-id of an existing mountpoint is 1162 chosen, an error is returned."; 1163 } 1164 } 1165 output { 1166 leaf mount-status { 1167 type mount-status; 1168 description 1169 "Indicates if the mount operation was successful."; 1170 } 1171 } 1172 } 1173 rpc unmount { 1174 description 1175 "This RPC allows an application or administrative user to 1176 unmount information from a remote datastore. If successful, 1177 the corresponding mountpoint will be removed from the 1178 datastore."; 1180 input { 1181 leaf mountpoint-id { 1182 type string { 1183 length "1..32"; 1184 } 1185 description 1186 "Identifies the mountpoint to be unmounted."; 1187 } 1188 } 1189 output { 1190 leaf mount-status { 1191 type mount-status; 1192 description 1193 "Indicates if the unmount operation was successful."; 1194 } 1195 } 1196 } 1197 container mount-server-mgmt { 1198 if-feature mount-server-mgmt; 1199 description 1200 "Contains information associated with managing the 1201 mountpoints of a datastore."; 1202 container mountpoints { 1203 description 1204 "Keep the mountpoint information consolidated 1205 in one place."; 1206 list mountpoint { 1207 key "mountpoint-id"; 1208 description 1209 "There can be multiple mountpoints. 1210 Each mountpoint is represented by its own 1211 list element."; 1212 leaf mountpoint-id { 1213 type string { 1214 length "1..32"; 1215 } 1216 description 1217 "An identifier of the mountpoint. 
  rpc mount {
    description
      "This RPC allows an application or administrative user to
       perform a mount operation.  If successful, it will result in
       the creation of a new mountpoint.";
    input {
      leaf mountpoint-id {
        type string {
          length "1..32";
        }
        description
          "Identifier for the mountpoint to be created.
           The mountpoint-id needs to be unique;
           if the mountpoint-id of an existing mountpoint is
           chosen, an error is returned.";
      }
    }
    output {
      leaf mount-status {
        type mount-status;
        description
          "Indicates if the mount operation was successful.";
      }
    }
  }

  rpc unmount {
    description
      "This RPC allows an application or administrative user to
       unmount information from a remote datastore.  If successful,
       the corresponding mountpoint will be removed from the
       datastore.";
    input {
      leaf mountpoint-id {
        type string {
          length "1..32";
        }
        description
          "Identifies the mountpoint to be unmounted.";
      }
    }
    output {
      leaf mount-status {
        type mount-status;
        description
          "Indicates if the unmount operation was successful.";
      }
    }
  }

  container mount-server-mgmt {
    if-feature mount-server-mgmt;
    description
      "Contains information associated with managing the
       mountpoints of a datastore.";
    container mountpoints {
      description
        "Keep the mountpoint information consolidated
         in one place.";
      list mountpoint {
        key "mountpoint-id";
        description
          "There can be multiple mountpoints.
           Each mountpoint is represented by its own
           list element.";
        leaf mountpoint-id {
          type string {
            length "1..32";
          }
          description
            "An identifier of the mountpoint.
             RPC operations refer to the mountpoint
             using this identifier.";
        }
        leaf mountpoint-origin {
          type enumeration {
            enum "client" {
              description
                "Mountpoint has been supplied and is
                 manually administered by a client";
            }
            enum "auto" {
              description
                "Mountpoint is automatically
                 administered by the server";
            }
          }
          config false;
          description
            "This describes how the mountpoint came
             into being.";
        }
        leaf subtree-ref {
          type subtree-ref;
          mandatory true;
          description
            "Identifies the root of the subtree in the
             target system that is to be mounted.";
        }
        uses mount-target;
        uses mount-monitor;
        uses mount-policies;
      }
    }
    container global-mount-policies {
      description
        "Provides mount policies applicable for all mountpoints,
         unless overridden for a specific mountpoint.";
      uses mount-policies;
    }
  }
}
<CODE ENDS>
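   As a non-normative illustration, the following sketch shows how a
   client might invoke the mount RPC defined above over NETCONF
   [RFC6241] for a mountpoint that has already been configured under
   mount-server-mgmt with the manual-mount policy.  The mountpoint-id
   value is an assumed example, and the module's namespace is
   represented by a placeholder:

   <rpc message-id="101"
        xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
     <!-- "urn:example:mount" is a placeholder for the namespace
          of the mount module -->
     <mount xmlns="urn:example:mount">
       <mountpoint-id>mount-ne1-if</mountpoint-id>
     </mount>
   </rpc>

   <rpc-reply message-id="101"
        xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
     <mount-status xmlns="urn:example:mount">ok</mount-status>
   </rpc-reply>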
7.  Security Considerations

   TBD

8.  Acknowledgements

   We wish to acknowledge the helpful contributions, comments, and
   suggestions that were received from Tony Tkacik, Ambika Tripathy,
   Robert Varga, Prabhakara Yellai, Shashi Kumar Bansal, Lukas Sedlak,
   and Benoit Claise.

9.  References

9.1.  Normative References

   [RFC2131]  Droms, R., "Dynamic Host Configuration Protocol", RFC
              2131, March 1997.

   [RFC2866]  Rigney, C., "RADIUS Accounting", RFC 2866, June 2000.

   [RFC3768]  Hinden, R., "Virtual Router Redundancy Protocol (VRRP)",
              RFC 3768, April 2004.

   [RFC3986]  Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform
              Resource Identifier (URI): Generic Syntax", STD 66, RFC
              3986, January 2005.

   [RFC6020]  Bjorklund, M., "YANG - A Data Modeling Language for the
              Network Configuration Protocol (NETCONF)", RFC 6020,
              October 2010.

   [RFC6241]  Enns, R., Bjorklund, M., Schoenwaelder, J., and A.
              Bierman, "Network Configuration Protocol (NETCONF)", RFC
              6241, June 2011.

   [RFC6536]  Bierman, A. and M. Bjorklund, "Network Configuration
              Protocol (NETCONF) Access Control Model", RFC 6536, March
              2012.

   [RFC7223]  Bjorklund, M., "A YANG Data Model for Interface
              Management", RFC 7223, May 2014.

9.2.  Informative References

   [I-D.clemm-netconf-yang-push]
              Clemm, A., Gonzalez Prieto, A., and E. Voit, "Subscribing
              to YANG datastore push updates", draft-clemm-netconf-
              yang-push-00 (work in progress), March 2015.

   [I-D.haas-i2rs-netmod-netconf-requirements]
              Haas, J., "I2RS Requirements for Netmod/Netconf", draft-
              haas-i2rs-netmod-netconf-requirements-01 (work in
              progress), March 2015.

   [I-D.ietf-i2rs-pub-sub-requirements]
              Clemm, A., Gonzalez Prieto, A., and E. Voit,
              "Requirements for subscription to YANG datastores",
              draft-ietf-i2rs-pub-sub-requirements-02 (work in
              progress), March 2015.

   [I-D.ietf-netconf-restconf]
              Bierman, A., Bjorklund, M., and K. Watsen, "RESTCONF
              Protocol", draft-ietf-netconf-restconf-04 (work in
              progress), January 2015.

   [I-D.voit-netmod-peer-mount-requirements]
              Voit, E., Clemm, A., and S. Mertens, "Requirements for
              Peer Mounting of YANG subtrees from Remote Datastores",
              draft-voit-netmod-peer-mount-requirements-02 (work in
              progress), March 2015.

Appendix A.  Example

   In the following example, we are assuming the use case of a network
   controller that wants to provide a controller network view to its
   client applications.  This view needs to include network
   abstractions that are maintained by the controller itself, as well
   as certain information about network devices where the network
   abstractions tie in with element-specific information.  For this
   purpose, the network controller leverages the mount capability
   specified in this document and presents a fictitious Controller
   Network YANG Module that is depicted in the outlined structure
   below.  The example illustrates how mounted information is
   leveraged by the mounting datastore to provide an additional level
   of information that ties together network and device abstractions,
   which could not otherwise be provided without introducing a
   (redundant) model to replicate those device abstractions.

   rw controller-network
   +-- rw topologies
   |  +-- rw topology [topo-id]
   |     +-- rw topo-id            node-id
   |     +-- rw nodes
   |     |  +-- rw node [node-id]
   |     |     +-- rw node-id            node-id
   |     |     +-- rw supporting-ne      network-element-ref
   |     |     +-- rw termination-points
   |     |        +-- rw term-point [tp-id]
   |     |           +-- tp-id    tp-id
   |     |           +-- ifref    mountedIfRef
   |     +-- rw links
   |        +-- rw link [link-id]
   |           +-- rw link-id    link-id
   |           +-- rw source     tp-ref
   |           +-- rw dest       tp-ref
   +-- rw network-elements
      +-- rw network-element [element-id]
         +-- rw element-id         element-id
         +-- rw element-address
         |  +-- ...
         +-- M  interfaces

   The controller network model consists of the following key
   components:

   o  A container with a list of topologies.  A topology is a graph
      representation of a network at a particular layer, for example,
      an IS-IS topology, an overlay topology, or an OpenFlow topology.
      Specific topology types can be defined in their own separate
      YANG modules that augment the controller network model.  Those
      augmentations are outside the scope of this example.

   o  An inventory of network elements, along with certain information
      that is mounted from each element.  The information that is
      mounted in this case concerns interface configuration
      information.  For this purpose, each list element that
      represents a network element contains a corresponding
      mountpoint.  The mountpoint uses as its target the network
      element address information provided in the same list element.

   o  Each topology in turn contains a container with a list of nodes.
      A node is a network abstraction of a network device in the
      topology.  A node is hosted on a network element, as indicated
      by a network-element leafref.  This way, the "logical" and
      "physical" aspects of a node in the network are cleanly
      separated.

   o  A node also contains a list of termination points that terminate
      links.  A termination point is implemented on an interface.
      Therefore, it contains a leafref that references the
      corresponding interface configuration, which is part of the
      mounted information of a network element.  Again, the
      distinction between termination points and interfaces provides a
      clean separation between logical concepts at the network
      topology level and device-specific concepts that are
      instantiated at the level of a network element.  Because the
      interface information is mounted from a different datastore and
      therefore occurs at a different level of the containment
      hierarchy than it would if it were not mounted, it is not
      possible to use the interface-ref type that is defined in the
      YANG data model for interface management [RFC7223] to allow the
      termination point to refer to its supporting interface.  For
      this reason, a new type definition "mountedIfRef" is introduced
      that allows referring to interface information that is mounted
      and hence has a different path.

   o  Finally, a topology also contains a container with a list of
      links.  A link is a network abstraction that connects nodes via
      node termination points.  In the example, directional point-to-
      point links are depicted, in which one node termination point
      serves as source and another as destination.

   The following is a YANG snippet of the module definition which
   makes use of the mountpoint definition.

   <CODE BEGINS>
   module controller-network {
     namespace "urn:cisco:params:xml:ns:yang:controller-network";
     // example only, replace with IANA namespace when assigned
     prefix cn;
     import mount {
       prefix mnt;
     }
     import interfaces {
       prefix if;
     }
     ...
     typedef mountedIfRef {
       type leafref {
         path "/cn:controller-network/cn:network-elements/"
            + "cn:network-element/cn:interfaces/if:interface/if:name";
            // cn:interfaces corresponds to the mountpoint
       }
     }
     ...
     list termination-point {
       key "tp-id";
       ...
       leaf ifref {
         type mountedIfRef;
       }
       ...
     list network-element {
       key "element-id";
       leaf element-id {
         type element-ID;
       }
       container element-address {
         ...  // choice definition that allows specifying
              // host name,
              // IP addresses, URIs, etc.
       }
       mnt:mountpoint "interfaces" {
         mnt:target "./element-address";
         mnt:subtree "/if:interfaces";
       }
       ...
     }
     ...
   <CODE ENDS>
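   To illustrate how the mountedIfRef type ties the topology to
   mounted interface information, the following non-normative sketch
   shows a termination point in the controller's datastore whose ifref
   leaf refers to an interface mounted from network element NE1 (see
   the instance data below); the topology, node, and termination point
   identifiers are assumed values:

   <controller-network
       xmlns="urn:cisco:params:xml:ns:yang:controller-network">
     <topologies>
       <topology>
         <topo-id>topo-1</topo-id>
         <nodes>
           <node>
             <node-id>node-1</node-id>
             <supporting-ne>NE1</supporting-ne>
             <termination-points>
               <term-point>
                 <tp-id>tp-1-0</tp-id>
                 <!-- leafref into the mounted interfaces of NE1 -->
                 <ifref>fastethernet-1/0</ifref>
               </term-point>
             </termination-points>
           </node>
         </nodes>
       </topology>
     </topologies>
   </controller-network>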
   Finally, the following contains an XML snippet of instantiated YANG
   information.  We assume three datastores: NE1 and NE2 each have a
   datastore (the mount targets) that contains interface configuration
   data, which is mounted into NC's datastore (the mount client).

   Interface information from NE1 datastore:

   <interfaces>
     <interface>
       <name>fastethernet-1/0</name>
       <type>ethernetCsmacd</type>
       <location>1/0</location>
     </interface>
     <interface>
       <name>fastethernet-1/1</name>
       <type>ethernetCsmacd</type>
       <location>1/1</location>
     </interface>
   </interfaces>

   Interface information from NE2 datastore:

   <interfaces>
     <interface>
       <name>fastethernet-1/0</name>
       <type>ethernetCsmacd</type>
       <location>1/0</location>
     </interface>
     <interface>
       <name>fastethernet-1/2</name>
       <type>ethernetCsmacd</type>
       <location>1/2</location>
     </interface>
   </interfaces>

   NC datastore with mounted interface information from NE1 and NE2:

   <controller-network>
     ...
     <network-elements>
       <network-element>
         <element-id>NE1</element-id>
         <element-address>  ....  </element-address>
         <interfaces>
           <interface>
             <name>fastethernet-1/0</name>
             <type>ethernetCsmacd</type>
             <location>1/0</location>
           </interface>
           <interface>
             <name>fastethernet-1/1</name>
             <type>ethernetCsmacd</type>
             <location>1/1</location>
           </interface>
         </interfaces>
       </network-element>
       <network-element>
         <element-id>NE2</element-id>
         <element-address>  ....  </element-address>
         <interfaces>
           <interface>
             <name>fastethernet-1/0</name>
             <type>ethernetCsmacd</type>
             <location>1/0</location>
           </interface>
           <interface>
             <name>fastethernet-1/2</name>
             <type>ethernetCsmacd</type>
             <location>1/2</location>
           </interface>
         </interfaces>
       </network-element>
     </network-elements>
     ...
   </controller-network>

Authors' Addresses

   Alexander Clemm
   Cisco Systems

   EMail: alex@cisco.com

   Jan Medved
   Cisco Systems

   EMail: jmedved@cisco.com

   Eric Voit
   Cisco Systems

   EMail: evoit@cisco.com