idnits 2.17.1 draft-clemm-netmod-mount-02.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- ** The document seems to lack an IANA Considerations section. (See Section 2.2 of https://www.ietf.org/id-info/checklist for how to handle the case when there are no actions for IANA.) ** The document seems to lack a both a reference to RFC 2119 and the recommended RFC 2119 boilerplate, even if it appears to use RFC 2119 keywords. RFC 2119 keyword, line 485: '... Servers MAY reject configuration re...' RFC 2119 keyword, line 493: '...ude mount points MAY be rejected. Tha...' RFC 2119 keyword, line 563: '... A mountpoint MUST be contained unde...' RFC 2119 keyword, line 571: '...the mount target SHOULD refer to a con...' RFC 2119 keyword, line 577: '...tively leaf-list SHOULD be mounted ins...' (23 more instances...) Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year == Line 677 has weird spacing: '...oint-id strin...' == Line 680 has weird spacing: '...rget-ip yang:...' == Line 682 has weird spacing: '... rw uri yang:...' == Line 684 has weird spacing: '...ostname yang:...' == Line 686 has weird spacing: '...nfo-ref mnt:s...' == (4 more instances...) == The document seems to contain a disclaimer for pre-RFC5378 work, but was first submitted on or after 10 November 2008. The disclaimer is usually necessary only for documents that revise or obsolete older RFCs, and that take significant amounts of text from those RFCs. If you can contact all authors of the source material and they are willing to grant the BCP78 rights to the IETF Trust, you can and should remove the disclaimer. Otherwise, the disclaimer is needed and you can ignore this comment. (See the Legal Provisions document at https://trustee.ietf.org/license-info for more information.) -- The document date (October 7, 2014) is 3486 days in the past. Is this intentional? Checking references for intended status: Experimental ---------------------------------------------------------------------------- ** Obsolete normative reference: RFC 3768 (Obsoleted by RFC 5798) ** Obsolete normative reference: RFC 6536 (Obsoleted by RFC 8341) == Outdated reference: A later version (-18) exists of draft-ietf-netconf-restconf-01 Summary: 4 errors (**), 0 flaws (~~), 9 warnings (==), 1 comment (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 Network Working Group A. Clemm 3 Internet-Draft J. Medved 4 Intended status: Experimental E. Voit 5 Expires: April 10, 2015 Cisco Systems 6 October 7, 2014 8 Mounting YANG-Defined Information from Remote Datastores 9 draft-clemm-netmod-mount-02.txt 11 Abstract 13 This document introduces capabilities that allow YANG datastores to 14 reference and incorporate information from remote datastores. 
This 15 is accomplished by extending YANG with the ability to define mount 16 points that act as references to data nodes in remote datastores, and 17 by providing the necessary means to manage and administer those mount 18 points. This facilitates the development of applications that need 19 to access data that transcends individual network devices while 20 improving network-wide object consistency. 22 This document also lays the groundwork for optional extensions to 23 support subscriptions to remote object updates and transparent 24 caching of objects. These options will speed application peformance 25 without sacrificing data consistency. 27 Status of This Memo 29 This Internet-Draft is submitted in full conformance with the 30 provisions of BCP 78 and BCP 79. 32 Internet-Drafts are working documents of the Internet Engineering 33 Task Force (IETF). Note that other groups may also distribute 34 working documents as Internet-Drafts. The list of current Internet- 35 Drafts is at http://datatracker.ietf.org/drafts/current/. 37 Internet-Drafts are draft documents valid for a maximum of six months 38 and may be updated, replaced, or obsoleted by other documents at any 39 time. It is inappropriate to use Internet-Drafts as reference 40 material or to cite them other than as "work in progress." 42 This Internet-Draft will expire on April 10, 2015. 44 Copyright Notice 46 Copyright (c) 2014 IETF Trust and the persons identified as the 47 document authors. All rights reserved. 49 This document is subject to BCP 78 and the IETF Trust's Legal 50 Provisions Relating to IETF Documents 51 (http://trustee.ietf.org/license-info) in effect on the date of 52 publication of this document. Please review these documents 53 carefully, as they describe your rights and restrictions with respect 54 to this document. Code Components extracted from this document must 55 include Simplified BSD License text as described in Section 4.e of 56 the Trust Legal Provisions and are provided without warranty as 57 described in the Simplified BSD License. 59 This document may contain material from IETF Documents or IETF 60 Contributions published or made publicly available before November 61 10, 2008. The person(s) controlling the copyright in some of this 62 material may not have granted the IETF Trust the right to allow 63 modifications of such material outside the IETF Standards Process. 64 Without obtaining an adequate license from the person(s) controlling 65 the copyright in such materials, this document may not be modified 66 outside the IETF Standards Process, and derivative works of it may 67 not be created outside the IETF Standards Process, except to format 68 it for publication as an RFC or to translate it into languages other 69 than English. 71 Table of Contents 73 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 74 2. Definitions and Acronyms . . . . . . . . . . . . . . . . . . 5 75 3. Example scenarios . . . . . . . . . . . . . . . . . . . . . . 6 76 3.1. Network controller view . . . . . . . . . . . . . . . . . 6 77 3.2. Distributed network configuration . . . . . . . . . . . . 8 78 4. Operating on mounted data . . . . . . . . . . . . . . . . . . 9 79 4.1. General principles . . . . . . . . . . . . . . . . . . . 10 80 4.2. Data retrieval . . . . . . . . . . . . . . . . . . . . . 10 81 4.3. Data modification . . . . . . . . . . . . . . . . . . . . 10 82 4.4. RPCs . . . . . . . . . . . . . . . . . . . . . . . . . . 11 83 4.5. Notifications . . . . . . . . . . . . . . . . . . . . . . 
11
84 4.6. Other considerations . . . . . . . . . . . . . . . . . . 12
85 5. Data model structure . . . . . . . . . . . . . . . . . . . . 12
86 5.1. YANG mountpoint extensions . . . . . . . . . . . . . . . 12
87 5.2. YANG structure diagrams . . . . . . . . . . . . . . . . . 13
88 5.3. Mountpoint management . . . . . . . . . . . . . . . . . . 13
89 5.4. Caching . . . . . . . . . . . . . . . . . . . . . . . . . 15
90 5.5. Other considerations . . . . . . . . . . . . . . . . . . 17
91 5.5.1. Authorization . . . . . . . . . . . . . . . . . . . . 18
92 5.5.2. Datastore qualification . . . . . . . . . . . . . . . 18
93 5.5.3. Local mounting . . . . . . . . . . . . . . . . . . . 19
94 5.5.4. Mount cascades . . . . . . . . . . . . . . . . . . . 19
95 5.5.5. Implementation considerations . . . . . . . . . . . . 19
96 5.5.6. Modeling best practices . . . . . . . . . . . . . . . 20
98 6. Datastore mountpoint YANG module . . . . . . . . . . . . . . 21
99 7. Security Considerations . . . . . . . . . . . . . . . . . . . 27
100 8. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 27
101 9. References . . . . . . . . . . . . . . . . . . . . . . . . . 28
102 9.1. Normative References . . . . . . . . . . . . . . . . . . 28
103 9.2. Informative References . . . . . . . . . . . . . . . . . 28
104 Appendix A. Example . . . . . . . . . . . . . . . . . . . . . . 29

106 1. Introduction

108 This document introduces a new capability that allows YANG datastores 109 [RFC6020] to incorporate and reference information from remote 110 datastores. This is provided by introducing a mountpoint concept. 111 This concept allows a YANG data node to be declared a "mount point", 112 under which a remote datastore subtree can be mounted. To the user 113 of the primary datastore, the remote information appears as an 114 integral part of the datastore. It allows remote data nodes and 115 datastore subtrees to be inserted into the local data hierarchy, 116 arranged below local data nodes. The concept is reminiscent of 117 mounting in a Network File System, which allows remote folders to be 118 mounted and to appear as if they were contained in the local file 119 system of the user's machine.

121 The ability to mount information from remote datastores is new and 122 not covered by existing YANG mechanisms. Until now, management 123 information provided in a datastore has been intrinsically tied to 124 the server hosting that datastore. In contrast, the capability introduced here allows 125 the server to represent information from remote systems as if it were 126 its own and contained in its own local data hierarchy.

128 YANG does provide means by which modules that have been separately 129 defined can reference and augment one another. YANG also provides 130 means to specify data nodes that reference other data nodes. 131 However, all the data is assumed to be instantiated as part of the 132 same datastore, for example a datastore provided through a NETCONF 133 server [RFC6241]. Existing YANG mechanisms do not account for the 134 possibility that information that needs to be referenced does not merely 135 reside in a different subtree of the same datastore, or in a separate 136 module that is also instantiated in the same datastore, 137 but is genuinely part of a different datastore that is provided 138 by a different server.

140 The requirements for mounting YANG subtrees from remote datastores, 141 along with a set of associated use cases, are documented in 142 [peermount-req].
The ability to mount data from remote datastores is 143 useful to address various problems that several categories of 144 applications are faced with:

146 One category of applications that can leverage this capability 147 concerns network controller applications that need to present a 148 consolidated view of management information in datastores across a 149 network. Controller applications are faced with the problem that in 150 order to expose information, that information needs to be part of 151 their own datastore. Today, this requires support of a corresponding 152 YANG data module. In order to expose information that concerns other 153 network elements, that information has to be replicated into the 154 controller's own datastore in the form of data nodes that may mirror 155 but are clearly distinct from corresponding data nodes in the network 156 element's datastore. In addition, in many cases, a controller needs 157 to impose its own hierarchy on the data that is different from the 158 one that was defined as part of the original module. An example of 159 this is interface configuration data, which would be contained 160 in a top-level container in a network element datastore, but may need 161 to be contained in a list in a controller datastore in order to be 162 able to distinguish instances from different network elements under 163 the controller's scope. This in turn would require the introduction of 164 redundant YANG modules that effectively replicate the same 165 information save for differences in hierarchy.

167 By directly mounting information from network element datastores, the 168 controller does not need to replicate the same information from 169 multiple datastores, nor does it need to re-define any network 170 element and system-level abstractions to be able to put them in the 171 context of network abstractions. Instead, the subtree of the remote 172 system is attached to the local mount point. Operations that need to 173 access data below the mount point are in effect transparently 174 redirected to the remote system, which is the authoritative owner of the 175 data. The mounting system does not even necessarily need to be aware 176 of the specific data in the remote subtree.

178 A second category of applications concerns decentralized networking 179 applications that require globally consistent configuration of 180 parameters. When each network element maintains its own datastore 181 with the same configurable settings, a single global change requires 182 modifying the same information in many network elements across a 183 network. Inconsistent configurations can result in network failures 184 that are difficult to troubleshoot. In many cases, what 185 is more desirable is the ability to configure such settings in a 186 single place, then make them available to every network element. 187 Today, this generally requires the introduction of specialized 188 servers and configuration options outside the scope of NETCONF, such 189 as RADIUS [RFC2866] or DHCP [RFC2131]. In order to address this 190 within the scope of NETCONF and YANG, the same information would have 191 to be redundantly modeled and maintained, representing operational 192 data (mirroring some remote server) on some network elements and 193 configuration data on a designated master. Either way, additional 194 complexity ensues.
196 Instead of replicating the same global parameters across different 197 datastores, the solution presented in this document allows a single 198 copy to be maintained in a subtree of a single datastore that is then 199 mounted by every network element that requires access to these 200 parameters. The global parameters can be hosted in a controller or a 201 designated network element. This considerably simplifies the 202 management of such parameters that need to be known across elements 203 in a network and require global consistency.

205 The capability to mount information from remote 206 datastores into another datastore is provided by a set of YANG 207 extensions that allow such mount points to be defined. For this purpose, 208 a new YANG module is introduced. The module defines the YANG 209 extensions, as well as a data model that can be used to manage the 210 mountpoints and the mounting process itself. Only the mounting module 211 and server need to be aware of the concepts introduced here. 212 Mounting is transparent to the models being mounted; any YANG model 213 can be mounted.

215 2. Definitions and Acronyms

217 Data node: An instance of management information in a YANG datastore.

219 DHCP: Dynamic Host Configuration Protocol.

221 Datastore: A conceptual store of instantiated management information, 222 with individual data items represented by data nodes that are 223 arranged in a hierarchical manner.

225 Data subtree: An instantiated data node and the data nodes that are 226 hierarchically contained within it.

228 Mount client: The system at which the mount point resides, into which 229 the remote subtree is mounted.

231 Mount point: A data node that receives the root node of the remote 232 datastore being mounted.

234 Mount server: The server with which the mount client communicates and 235 which provides the mount client with access to the mounted 236 information. Can be used synonymously with mount target.

238 Mount target: A remote server whose datastore is being mounted.

240 NACM: NETCONF Access Control Model

241 NETCONF: Network Configuration Protocol

243 RADIUS: Remote Authentication Dial In User Service.

245 RPC: Remote Procedure Call

247 Remote datastore: A datastore residing at a remote node.

249 URI: Uniform Resource Identifier

251 YANG: A data definition language for NETCONF

253 3. Example scenarios

255 The following example scenarios outline some of the ways in which the 256 ability to mount YANG datastores can be applied. Other mount 257 topologies can be conceived in addition to the ones presented here.

259 3.1. Network controller view

261 Network controllers can use the mounting capability to present a 262 consolidated view of management information across the network. This 263 allows network controllers to expose network-wide abstractions, such 264 as topologies or paths, multi-device abstractions, such as VRRP 265 [RFC3768], and network-element-specific abstractions, such as 266 information about a network element's interfaces.

268 While an application on top of a controller could bypass the 269 controller to access network elements directly for their element- 270 specific abstractions, this would come at the expense of added 271 inconvenience for the client application. In addition, it would 272 compromise the ability to provide layered architectures in which 273 access to the network by controller applications is truly channeled 274 through the controller.
276 Without a mounting capability, a network controller would need to at 277 least conceptually replicate data from network elements to provide 278 such a view, incorporating network element information into its own 279 controller model, which is separate from the network element's, 280 and indicating that the information in the controller model is to be 281 populated from the network elements. This can introduce issues such as 282 data inconsistency and staleness. Equally importantly, it would lead 283 to the redundant definition of data models: one model that is 284 implemented by the network element itself, and another model to be 285 implemented by the network controller. This leads to poor 286 maintainability, as analogous information has to be redundantly 287 defined and implemented across different data models. In general, 288 controllers cannot simply support the same modules as their network 289 elements for the same information because that information needs to 290 be put into a different context. This leads to "node" information 291 that needs to be instantiated and indexed differently, because there 292 are multiple instances across different datastores.

294 For example, "system"-level information of a network element would 295 most naturally be placed into a top-level container at that network 296 element's datastore. At the same time, the same information in the 297 context of the overall network, such as that maintained by a controller, 298 might better be provided in a list. For example, the controller 299 might maintain a list with a list element for each network element, 300 underneath which the network element's system-level information is 301 contained. However, the containment structure of data nodes in a 302 module, once defined, cannot be changed. This means that in the 303 context of a network controller, a second module that repeats the 304 same system-level information would need to be defined, implemented, 305 and maintained. Any augmentations that add additional system-level 306 information to the original module will likewise need to be 307 redundantly defined, once for the "system" module, a second time for 308 the "controller" module.

310 By allowing a network controller to directly mount information from 311 network element datastores, the controller does not need to replicate 312 the same information from multiple datastores. Perhaps even more 313 importantly, the need to re-define any network element and system- 314 level abstractions to be able to put them in the context of network 315 abstractions is avoided. In this solution, a network controller's 316 datastore mounts information from many network element datastores. 317 For example, the network controller datastore could implement a list 318 in which each list element contains a mountpoint. Each mountpoint 319 mounts a subtree from a different network element's datastore.

321 This scenario is depicted in Figure 1. In the figure, M1 is the 322 mountpoint for the datastore in Network Element 1 and M2 is the 323 mountpoint for the datastore in Network Element 2. MDN1 is the 324 mounted data node in Network Element 1, and MDN2 is the mounted data 325 node in Network Element 2.
327        +-------------+
328        |  Network    |
329        |  Controller |
330        |  Datastore  |
331        |             |
332        |  +--N10     |
333        |  +--N11     |
334        |  +--N12     |
335        |     +--M1*******************************
336        |     +--M2******                        *
337        |             | *                        *
338        +-------------+ *                        *
339              * +---------------+       * +---------------+
340              * |  +--N1        |       * |  +--N5        |
341              * |  +--N2        |       * |  +--N6        |
342              ********> +--MDN2 |       *********> +--MDN1 |
343                |  +--N3        |         |  +--N7        |
344                |  +--N4        |         |  +--N8        |
345                |               |         |               |
346                |  Network      |         |  Network      |
347                |  Element      |         |  Element      |
348                |  Datastore    |         |  Datastore    |
349                +---------------+         +---------------+

351 Figure 1: Network controller mount topology

353 3.2. Distributed network configuration

355 A second category of applications concerns decentralized networking 356 applications that require globally consistent configuration of 357 parameters that need to be known across elements in a network. 358 Today, the configuration of such parameters is generally performed on 359 a per-network-element basis, which is not only redundant but, more 360 importantly, error-prone. Inconsistent configurations lead to 361 erroneous network behavior that can be challenging to troubleshoot.

363 Using the ability to mount information from remote datastores opens 364 up a new possibility for managing such settings. Instead of 365 replicating the same global parameters across different datastores, a 366 single copy is maintained in a subtree of a single datastore. This 367 datastore can be hosted in a controller or a designated network element. 368 The subtree is subsequently mounted by every network element that 369 requires access to these parameters.

371 In many ways, this category of applications is an inverse of the 372 previous category: Whereas in the network controller case data from 373 many different datastores would be mounted into the same datastore 374 with multiple mountpoints, in this case many elements, each with 375 their own datastore, mount the same remote datastore.

378 The scenario is depicted in Figure 2. In the figure, M1 is the 379 mountpoint for the Network Controller datastore in Network Element 1 380 and M2 is the mountpoint for the Network Controller datastore in 381 Network Element 2. MDN is the mounted data node in the Network 382 Controller datastore that contains the data nodes that represent the 383 shared configuration settings.

385   +---------------+        +---------------+
386   |  Network      |        |  Network      |
387   |  Element      |        |  Element      |
388   |  Datastore    |        |  Datastore    |
389   |               |        |               |
390   |  +--N1        |        |  +--N5        |
391   |  | +--N2      |        |  | +--N6      |
392   |  | +--N2      |        |  | +--N6      |
393   |  | +--N3      |        |  | +--N7      |
394   |  | +--N4      |        |  | +--N8      |
395   |  |            |        |  |            |
396   |  +--M1        |        |  +--M2        |
397   +-----*---------+        +-----*---------+
398         *                        *         +---------------+
399         *                        *         |               |
400         *                        *         |  +--N10       |
401         *                        *         |  +--N11       |
402         *********************************************> +--MDN |
403                                             |     +--N20    |
404                                             |     +--N21    |
405                                             |     ...       |
406                                             |     +--N22    |
407                                             |               |
408                                             |  Network      |
409                                             |  Controller   |
410                                             |  Datastore    |
411                                             +---------------+

413 Figure 2: Distributed config settings topology

415 4. Operating on mounted data

417 This section provides a rough illustration of the operations flow 418 involving mounted datastores.

420 4.1. General principles

422 The first thing to note about these operation flows is 423 that a mount client essentially constitutes a 424 special management application that interacts with a remote system. 425 To the remote system, the mount client constitutes in effect just 426 another application.
The remote system is the authoritative owner of 427 the data. While it is conceivable that the remote system (or an 428 application that proxies for the remote system) provides certain 429 functionality to facilitate the specific needs of the mount client and 430 make it more efficient, the fact that another system decides to 431 expose a certain "view" of that data is fundamentally not the remote 432 system's concern.

434 When a client application makes a request to a server that involves 435 data that is mounted from a remote system, the server will 436 effectively act as a proxy to the remote system on the client 437 application's behalf. It will extract from the client application 438 request the portion that involves the mounted subtree from the remote 439 system. It will strip that portion of the local context, i.e. remove 440 any local data paths and insert the data path of the mounted remote 441 subtree, as appropriate. The server will then forward the transposed 442 request to the remote system that is the authoritative owner of the 443 mounted data, acting itself as a client to the remote server. Upon 444 receiving the reply, the server will transpose the results into the 445 local context as needed, for example map the data paths into the 446 local data tree structure, and combine those results with the results 447 of the remainder portion of the original request.

449 4.2. Data retrieval

451 In the simplest and at the same time perhaps the most common case, 452 the request will involve simple data retrieval. In that case, a 453 "get" or "get-config" operation might be applied to a subtree 454 whose scope includes a mount point. When resolving the mount point, 455 the server issues its own "get" or "get-config" request 456 against the remote system's subtree that is attached to the mount 457 point. The returned information is then inserted into the data 458 structure that is in turn returned to the client that originally 459 invoked the request.

461 4.3. Data modification

463 Requests that involve editing of information and "writing through" to 464 remote systems are potentially more complicated, particularly if 465 transactions and locking across multiple configuration items are 466 involved. However, these cases are not our primary concern at this 467 time. Data modifications that involve mounted information need to be 468 supported only in the following cases:

470 o When the scope of the operation falls within a single mountpoint. 471 In that case, the data modification request (e.g. edit-config) 472 is directly passed through to the mount server. The mount 473 client acts as a direct pass-through.

475 o When the modification involves no locking and no rollback, i.e. 476 "best effort" semantics. In that case, the scope of the operation 477 may extend beyond a single mountpoint.

479 This functionality is entirely sufficient for most use cases that 480 need to be addressed. As outlined in [peermount-req], peer mount is aimed at 481 use cases for which eventual consistency is sufficient 482 and that do not require transactional consistency. As a result, the 483 implementation is greatly simplified. Support for network-wide 484 transactions and locking in conjunction with mount is not required. 485 Servers MAY reject configuration requests involving commits and 486 rollbacks, where the request involves datastore subtrees that include 487 mount points below the root of the subtree.
That said, it is 488 conceivable to introduce in the future a special capability by which 489 servers indicate that they provide such support.

491 By the same token, lock operations that extend across multiple 492 datastores do not need to be supported. Lock requests on subtrees 493 that include mount points MAY be rejected. That said, it is 494 conceivable to introduce in the future a capability indicating that 495 such support is provided. In order to perform a lock operation 496 on a subtree that contains mount points, a server will itself need to 497 obtain a lock from each of the respective remote mount servers before 498 confirming the lock. If a lock cannot be obtained within a stringent 499 timeout interval, the lock request will need to be denied and any 500 locks that were already obtained released.

502 4.4. RPCs

504 YANG-Mount is aimed at data nodes in datastores. At this point, it 505 does not extend towards RPCs that are defined as part of YANG modules 506 whose contents are being mounted. Support for RPCs involving mounted 507 portions of the datastore is for further study.

509 4.5. Notifications

511 YANG-Mount does not extend towards notifications. It is conceivable 512 to offer such support in the future; however, at this point 513 notification support involving mounted data nodes is for further 514 study.

516 4.6. Other considerations

518 Since access to mounted information in general involves communication with a 519 remote system, there is a possibility that the remote system does not 520 respond within a certain amount of time, that connectivity is lost, 521 or that other errors occur. Accordingly, the ability to mount 522 datastores also involves mountpoint management, which includes the 523 ability to configure timeouts, retries, and management of mountpoint 524 state (including dynamic addition and removal of mountpoints). 525 Mountpoint management will be discussed in Section 5.3.

527 It is expected that implementations will introduce caching schemes. 528 Caching can increase performance and efficiency in certain scenarios 529 (for example, in the case of data that is frequently read but that 530 rarely changes), but increases implementation complexity. Caching is 531 not required for YANG-Mount to work; without it, all access to 532 mounted information is "on-demand", and the authoritative data 533 node is always accessed. Whether to perform caching is a local 534 implementation decision. However, when caching is introduced, it can 535 benefit from additional standardization, specifically the ability to 536 subscribe to updates on remote data published by remote servers. Some such 537 optimizations to facilitate caching support will be discussed in 538 Section 5.4.

540 5. Data model structure

542 5.1. YANG mountpoint extensions

544 At the center of the module is a set of YANG extensions that allow a 545 mountpoint to be defined.

547 o The first extension, "mountpoint", is used to declare a 548 mountpoint. The extension takes the name of the mountpoint as an 549 argument.

551 o The second extension, "target", serves as a substatement 552 underneath a mountpoint statement. It takes an argument that 553 identifies the target system. The argument is a reference to a 554 data node that contains the information that is needed to identify 555 and address a remote server, such as an IP address, a host name, 556 or a URI [RFC3986].

558 o The third extension, "subtree", also serves as a substatement 559 underneath a mountpoint statement. It takes an argument that 560 defines the root node of the datastore subtree that is to be 561 mounted, specified as a string that contains a path expression. The three extensions are intended to be used together, as illustrated below.
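The following is a non-normative sketch of such a usage. The module name, the node-address container, and the subtree path "/ex-system:system" are illustrative assumptions only and are not defined by this document; the structure simply mirrors the diagram shown in Section 5.2.

   module example-mounting-module {
     namespace "urn:example:mounting-module";
     prefix exm;

     import mount {
       prefix mnt;
     }

     container network {
       container nodes {
         list node {
           key "node-ID";
           leaf node-ID {
             type string;
           }
           // Illustrative: addressing information for the remote
           // server, reusing the mount-target grouping of the mount
           // module.
           container node-address {
             uses mnt:mount-target;
           }
           // This container acts as the mount point; the remote
           // subtree identified by mnt:subtree appears underneath it.
           container node-system-info {
             mnt:mountpoint "node-system-info" {
               mnt:target "../node-address";
               mnt:subtree "/ex-system:system";
             }
           }
         }
       }
     }
   }

In this sketch, the mnt:target argument points to a local data node that holds the remote server's address information, and the mnt:subtree argument carries the path of the remote subtree to be attached, corresponding to the substatements described above.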
563 A mountpoint MUST be contained underneath a container. Future 564 revisions might allow for mountpoints to be contained underneath 565 other data nodes, such as lists, leaf-lists, and cases. However, to 566 keep things simple, at this point mounting is only allowed directly 567 underneath a container.

569 Only a single data node can be mounted at one time. While the mount 570 target could refer to any data node, it is recommended that, as a best 571 practice, the mount target SHOULD refer to a container. It is 572 possible to maintain, for example, a list of mount points, 573 each of which has as its mount target an element of a remote list. 574 However, to avoid unnecessary proliferation of the number of mount 575 points and associated management overhead, when data from lists or 576 leaf-lists is to be mounted, a container containing the list or 577 leaf-list, respectively, SHOULD be mounted instead of individual list 578 elements.

580 It is possible for a mounted datastore to contain another mountpoint, 581 thus leading to several levels of mount indirection. However, 582 mountpoints MUST NOT introduce circular dependencies. In particular, 583 a mounted datastore MUST NOT contain a mountpoint which specifies the 584 mounting datastore as a target and a subtree which contains as root 585 node a data node that in turn contains the original mountpoint. 586 Whenever a mount operation is performed, this condition MUST be 587 validated by the mount client.

590 5.2. YANG structure diagrams

592 YANG data model structure overviews have proven very useful to convey 593 the "Big Picture". It would be useful to indicate in YANG data model 594 structure overviews the fact that a given data node serves as a 595 mountpoint. For this purpose, we propose a corresponding 596 extension to the structure representation convention. Specifically, 597 we propose to prefix the name of the mounting data node with an upper- 598 case 'M'.

600    rw network
601    +-- rw nodes
602       +-- rw node [node-ID]
603          +-- rw node-ID
604          +-- M node-system-info

606 5.3. Mountpoint management

608 The YANG module contains facilities to manage the mountpoints 609 themselves.

611 For this purpose, a list of the mountpoints is introduced. Each list 612 element represents a single mountpoint. It includes an 613 identification of the mount target, i.e. the remote system hosting 614 the remote datastore, and a definition of the subtree of the remote 615 data node being mounted. It also includes monitoring information 616 about the current status (indicating whether the mount has been 617 successful and is operational, or whether an error condition applies, 618 such as the target being unreachable or referring to an invalid 619 subtree).

621 In addition to the list of mountpoints, a set of global mount policy 622 settings allows parameters such as mount retries and timeouts to be set.

624 Each mountpoint list element also contains a set of the same 625 configuration knobs, allowing administrators to override global mount 626 policies and configure mount policies on a per-mountpoint basis if 627 needed.

629 There are two ways in which mounting can occur: automatically (dynamically 630 performed as part of system operation) or manually (administered by a 631 user or client application).
A separate mountpoint-origin object is 632 used to distinguish between manually configured and automatically 633 populated mountpoints.

635 Whether mounting occurs automatically or needs to be manually 636 configured by a user or an application can depend on the mountpoint 637 being defined, i.e. the semantics of the model.

639 When configured automatically, mountpoint information is 640 automatically populated by the datastore that implements the 641 mountpoint. The precise mechanisms for discovering mount targets and 642 bootstrapping mount points are provided by the mount client 643 infrastructure and are outside the scope of this specification. 644 Likewise, when a mountpoint should be deleted and when it should 645 merely have its mount-status indicate that the target is unreachable 646 is a system-specific implementation decision.

648 Manual mounting consists of two steps. In a first step, a mountpoint 649 is manually configured by a user or client application through 650 administrative action. Once a mountpoint has been configured, actual 651 mounting occurs through an RPC that is defined specifically for that 652 purpose. To unmount, a separate RPC is invoked; mountpoint 653 configuration information needs to be explicitly deleted. Manual 654 mounting can also be used to override automatic mounting, for example 655 to allow an administrator to set up or remove a mountpoint.

657 It should be noted that mountpoint management does not allow users to 658 manually "extend" the model, i.e. simply add a subtree underneath 659 some arbitrary data node of a datastore without a 660 mountpoint defined in the model to support it. A mountpoint 661 definition is a formal part of the model with well-defined semantics. 662 Accordingly, mountpoint management does not allow users to 663 dynamically "extend" the data model itself. It allows users to 664 populate the datastore and mount structure within the confines of a 665 model that has been defined beforehand.

667 The structure of the mountpoint management data model is depicted in 668 the following figure, where brackets enclose list keys, "rw" means 669 configuration, "ro" operational state data, and "?" designates 670 optional nodes. Parentheses enclose choice and case nodes. The 671 figure does not depict all definitions; it is intended to illustrate 672 the overall structure.

674    rw mount-server-mgmt
675    +-- rw mountpoints
676    |  +-- rw mountpoint [mountpoint-id]
677    |     +-- rw mountpoint-id       string
678    |     +-- rw mount-target
679    |     |  +--: (IP)
680    |     |  |  +-- rw target-ip        yang:ip-address
681    |     |  +--: (URI)
682    |     |  |  +-- rw uri              yang:uri
683    |     |  +--: (host-name)
684    |     |  |  +-- rw hostname         yang:host
685    |     |  +-- (node-ID)
686    |     |  |  +-- rw node-info-ref    mnt:subtree-ref
687    |     |  +-- (other)
688    |     |     +-- rw opaque-target-id string
689    |     +-- rw subtree-ref         mnt:subtree-ref
690    |     +-- ro mountpoint-origin   enumeration
691    |     +-- ro mount-status        mnt:mount-status
692    |     +-- rw manual-mount?       empty
693    |     +-- rw retry-timer?        uint16
694    |     +-- rw number-of-retries?  uint8
695    +-- rw global-mount-policies
696       +-- rw manual-mount?       empty
697       +-- rw retry-timer?        uint16
698       +-- rw number-of-retries?  uint8

700 5.4. Caching

702 Under certain circumstances, it can be useful to maintain a cache of 703 remote information. Instead of accessing the remote system, requests 704 are served from a copy that is locally maintained. This is 705 particularly advantageous in cases where data is slow-changing, i.e.
706 when there are many more "read" operations than changes to the 707 underlying data node, and in cases where a significant delay would be 708 incurred when accessing the remote system, which might be prohibitive 709 for certain applications. Examples of such applications are 710 applications that involve real-time control loops requiring response 711 times that are measured in milliseconds.

713 Caching can in principle apply to both retrieval and modification 714 operations. However, as data nodes that are mounted from an 715 authoritative datastore represent the "golden copy", it is important 716 that any modifications are reflected as soon as they are made. 717 Likewise, typical applications that operate on YANG datastores will 718 not apply high-frequency changes to the same data nodes. For those 719 reasons, the focus in the following is on caching for data retrieval 720 purposes. Caching for operations that involve changes is 721 not considered in the following.

723 It is a local implementation decision of mount clients whether to 724 cache information once it has been fetched. However, in order to 725 support more powerful caching schemes, it becomes necessary for the 726 mount server to "push" information proactively. This means that at 727 this point, the mount server is no longer oblivious to the fact that 728 a mount client exists.

730 For this purpose, we are planning in a subsequent revision to 731 introduce caching extensions. The following outlines what these 732 extensions will entail.

734 The first set of extensions concerns the mount client. We are adding 735 an extension to mountpoint management that allows a mount client to 736 define a specific binding type for a given mount point. A mount 737 binding specifies how the client wishes to have information from a 738 remote system populated. The following binding types are defined:

740 o On-demand. This is the "default" binding. No caching is applied. 741 Information is always retrieved from the remote datastore 742 whenever a client application requests it.

744 o Periodic. In this case, the mounted data is updated periodically. 745 The interval at which updates are to take place can be 746 parameterized.

748 o On-change. In this case, mounted data is updated whenever a 749 change is detected. In order to reduce the risk of churn in the 750 case of fast-changing data, a dampening interval can be specified, 751 indicating the minimum time that must pass between updates. 752 Further extensions can allow specification of the magnitude or size of 753 change that must be reached before an update is reported.

755 The second set of extensions concerns the mount server. NETCONF and 756 RESTCONF are fundamentally request-response based protocols. In 757 order to support periodic and, even more so, on-change binding types, 758 it is advantageous if the remote server supports a mechanism that 759 allows a mount client to subscribe to data in a datastore subtree and 760 then have that data be automatically delivered without requiring 761 further requests. Certainly, resorting to polling should be avoided! 762 There are different mechanisms conceivable for this, such as the 763 support of information push or publish/subscribe.

765 Data subscription mechanisms can be of interest beyond YANG-Mount. 766 However, at this point, such a mechanism has not yet been defined. 767 The following outlines one way in which this can be achieved.
769 One way in which this can be achieved is through simple NETCONF 770 notifications and a special data subscription function, whose 771 configuration can be expressed through YANG itself. 773 The notification contains several parameters: 775 o A subscription correlator, referencing the name of the 776 subscription on whose behalf the notification is sent. 778 o A data node that contains a representation of the datastore 779 subtree. (This can be simply a node of type string or, for XML- 780 based encoding, anyxml.) 782 The configuration of the subscription in turn contains several 783 parameters as well: 785 o The root of the data subtree being subscribed to 787 o The identity of the subscriber(s) 789 o The subscription type: periodic or on change 791 o For periodic subscriptions: the start time and interval with which 792 to push updates 794 o For change-based subscriptions: the dampening interval with which 795 to push repeated changes, an indicator for the magnitude of 796 changes, etc 798 5.5. Other considerations 799 5.5.1. Authorization 801 Whether a mount client is allowed to modify information in a mounted 802 datastore or only retrieve it and whether there are certain data 803 nodes or subtrees within the mounted information for which access is 804 restricted is subject to authorization rules. To the mounted system, 805 a mounting client will in general appear like any other client. 806 Authorization privileges for remote mounting clients need to be 807 specified through NACM (NETCONF Access Control Model) [RFC6536]. 809 Users and implementers need to be aware of certain issues when 810 mounted information is modified, not just retrieved. Specifically, 811 in certain corner cases validation of changes made to mounted data 812 may involve constraints that involve information that is not visible 813 to the mounting datastore. This means that in such cases the reason 814 for validation failures may not always be fully understood by the 815 mounting system. 817 Likewise, if the concepts of transactions and locking are applied at 818 the mounting system, these concepts will need to be applied across 819 multiple systems, not just across multiple data nodes within the same 820 system. This capability may not be supported by every 821 implementation. For example, locking a datastore that contains a 822 mountpoint requires that the mount client obtains corresponding locks 823 on the mounted datastore as needed. Any request to acquire a lock on 824 a configuration subtree that includes a mountpoint MUST NOT be 825 granted if the mount client fails to obtain a corresponding lock on 826 the mounted system. Likewise, in case transactions are supported by 827 the mounting system, but not the target system, requests to acquire a 828 lock on a configuration subtree that includes a mountpoint MUST NOT 829 be granted. 831 5.5.2. Datastore qualification 833 It is conceivable to differentiate between different datastores on 834 the remote server, that is, to designate the name of the actual 835 datastore to mount, e.g. "running" or "startup". However, for the 836 purposes of this spec, we assume that the datastore to be mounted is 837 generally implied. Mounted information is treated as analogous to 838 operational data; in general, this means the running or "effective" 839 datastore is the target. That said, the information which targets to 840 mount does constitute configuration and can hence be part of a 841 startup or candidate datastore. 
843 It is conceivable to use mount in conjunction with ephemeral 844 datastores, to address requirements outlined in 845 [draft-haas-i2rs-netmod-netconf-requirements]. Support for such a 846 scheme is for further study and may be included in a future revision 847 of this spec.

849 5.5.3. Local mounting

851 It is conceivable that the mount target does not reside in a remote 852 datastore, but that data nodes in the same datastore as the 853 mountpoint are targeted for mounting. This amounts to introducing an 854 "aliasing" capability in a datastore. While this is not the scenario 855 that is primarily targeted, it is supported and there may be valid 856 use cases for it.

858 5.5.4. Mount cascades

860 It is possible for the mounted subtree to in turn contain a 861 mountpoint. However, circular mount relationships MUST NOT be 862 introduced. For this reason, a mounted subtree MUST NOT contain a 863 mountpoint that refers back to the mounting system with a mount 864 target that directly or indirectly contains the originating 865 mountpoint. As part of a mount operation, the mount points of the 866 mounted system need to be checked accordingly.

868 5.5.5. Implementation considerations

870 Implementation specifics are outside the scope of this specification. 871 That said, the following considerations apply:

873 Systems that wish to mount information from remote datastores need to 874 implement a mount client. The mount client communicates with a 875 remote system to access the remote datastore. To do so, there are 876 several options:

878 o The mount client acts as a NETCONF client to a remote system. 879 Alternatively, another interface to the remote system can be used, 880 such as a REST API using JSON encodings, as specified in 881 [I-D.ietf-netconf-restconf]. Either way, to the remote system, 882 the mount client constitutes essentially a client application like 883 any other. The mount client in effect IS a special kind of client 884 application.

886 o The mount client communicates with a remote mount server through a 887 separate protocol. The mount server is deployed on the same 888 system as the remote NETCONF datastore and interacts with it 889 through a set of local APIs.

891 o The mount client communicates with a remote mount server that acts 892 as a NETCONF client proxy to a remote system, on the client's 893 behalf. The communication between the mount client and the remote mount 894 server might involve a separate protocol, which is translated into 895 NETCONF operations by the remote mount server.

897 It is the responsibility of the mount client to manage the 898 association with the target system, e.g. validate that it is still 899 reachable by maintaining a permanent association, perform 900 reachability checks in the case of a connectionless transport, etc.

902 It is the responsibility of the mount client to manage the 903 mountpoints. This means that the mount client needs to populate the 904 mountpoint monitoring information (e.g. keep mount-status up to date 905 and determine, in the case of automatic mounting, when to add and 906 remove mountpoint configuration). In the case of automatic mounting, 907 the mount client also interacts with the mountpoint discovery and 908 bootstrap process.

910 The mount client also needs to participate in servicing datastore 911 operations involving mounted information. An operation request 912 involving a mountpoint is relayed by the mounting system's 913 infrastructure to the mount client.
For example, a request to 914 retrieve information from a datastore leads to an invocation of an 915 internal mount client API when a mount point is reached. The mount 916 client then relays a corresponding operation to the remote datastore. 917 It subsequently relays the result along with any responses back to 918 the invoking infrastructure, which then merges the result (e.g. a 919 retrieved subtree with the rest of the information that was 920 retrieved) as needed. Relaying the result may involve the need to 921 transpose error response codes in certain corner cases, e.g. when 922 mounted information could not be reached due to loss of connectivity 923 with the remote server, or when a configuration request failed due to 924 validation error. 926 5.5.6. Modeling best practices 928 There is a certain amount of overhead associated with each mount 929 point. The mount point needs to be managed and state maintained. 930 Data subscriptions need to be maintained. Requests including mounted 931 subtrees need to be decomposed and responses from multiple systems 932 combined. 934 For those reasons, as a general best practice, models that make use 935 of mount points SHOULD be defined in a way that minimizes the number 936 of mountpoints required. Finely granular mounts, in which multiple 937 mountpoints are maintained with the same remote system, each 938 containing only very small data subtrees, SHOULD be avoided. For 939 example, lists SHOULD only contain mountpoints when individual list 940 elements are associated with different remote systems. To mount data 941 from lists in remote datastores, a container node that contains all 942 list elements SHOULD be mounted instead of mounting each list element 943 individually. Likewise, instead of having mount points refer to 944 nodes contained underneath choices, a mountpoint should refer to a 945 container of the choice. 947 6. Datastore mountpoint YANG module 949 950 file "mount@2014-10-07.yang" 951 module mount { 952 namespace "urn:cisco:params:xml:ns:yang:mount"; 953 // replace with IANA namespace when assigned 955 prefix mnt; 957 import ietf-yang-types { 958 prefix yang; 959 } 961 organization 962 "IETF NETMOD (NETCONF Data Modeling Language) Working Group"; 964 contact 965 "WG Web: http://tools.ietf.org/wg/netmod/ 966 WG List: netmod@ietf.org 968 WG Chair: Juergen Schoenwaelder 969 j.schoenwaelder@jacobs-university.de 971 WG Chair: Tom Nadeau 972 tnadeau@lucidvision.com 974 Editor: Alexander Clemm 975 alex@cisco.com"; 977 description 978 "This module provides a set of YANG extensions and definitions 979 that can be used to mount information from remote datastores."; 981 revision 2014-10-07 { 982 description "Initial revision."; 983 } 985 feature mount-server-mgmt { 986 description 987 "Provide additional capabilities to manage remote mount 988 points"; 989 } 990 extension mountpoint { 991 description 992 "This YANG extension is used to mount data from a remote 993 system in place of the node under which this YANG extension 994 statement is used. 996 This extension takes one argument which specifies the name 997 of the mountpoint. 999 This extension can occur as a substatement underneath a 1000 container statement, a list statement, or a case statement. 1001 As a best practice, it SHOULD occur as statement only 1002 underneath a container statement, but it MAY also occur 1003 underneath a list or a case statement. 1005 The extension takes two parameters, target and subtree, each 1006 defined as their own YANG extensions. 
1007          A mountpoint statement MUST contain a target and a subtree
1008          substatement for the mountpoint definition to be valid.

1010          The target system MAY be specified in terms of a data node
1011          that uses the grouping 'mnt:mount-target'. However, it
1012          can also be specified in terms of any other data node that
1013          contains sufficient information to address the mount target,
1014          such as an IP address, a host name, or a URI.

1016          The subtree SHOULD be specified in terms of a data node of
1017          type 'mnt:subtree-ref'. The targeted data node MUST
1018          represent a container.

1020          It is possible for the mounted subtree to in turn contain a
1021          mountpoint. However, circular mount relationships MUST NOT
1022          be introduced. For this reason, a mounted subtree MUST NOT
1023          contain a mountpoint that refers back to the mounting system
1024          with a mount target that directly or indirectly contains the
1025          originating mountpoint.";

1027       argument "name";
1028    }

1030    extension target {
1031       description
1032          "This YANG extension is used to specify a remote target
1033          system from which to mount a datastore subtree. This YANG
1034          extension takes one argument which specifies the remote
1035          system. In general, this argument will contain the name of
1036          a data node that contains the remote system information. It
1037          is recommended that the reference data node use the
1038          mount-target grouping that is defined further below in this
1039          module.

1041          This YANG extension can occur only as a substatement below
1042          a mountpoint statement. It MUST NOT occur as a substatement
1043          below any other YANG statement.";

1045       argument "target-name";
1046    }

1048    extension subtree {
1049       description
1050          "This YANG extension is used to specify a subtree in a
1051          datastore that is to be mounted. This YANG extension takes
1052          one argument which specifies the path to the root of the
1053          subtree. The root of the subtree SHOULD represent an
1054          instance of a YANG container. However, it MAY also
1055          represent another data node.

1057          This YANG extension can occur only as a substatement below
1058          a mountpoint statement. It MUST NOT occur as a substatement
1059          below any other YANG statement.";

1061       argument "subtree-path";
1062    }

1064    typedef mount-status {
1065       description
1066          "This type is used to represent the status of a
1067          mountpoint.";
1068       type enumeration {
1069          enum ok {
1070             description
1071             "Mounted";
1072          }
1073          enum no-target {
1074             description
1075             "The argument of the mountpoint does not define a
1076             target system";
1077          }
1078          enum no-subtree {
1079             description
1080             "The argument of the mountpoint does not define a
1081             root of a subtree";
1082          }
1083          enum target-unreachable {
1084             description
1085             "The specified target system is currently
1086             unreachable";
1087          }
1088          enum mount-failure {
1089             description
1090             "Any other mount failure";
1091          }
1092          enum unmounted {
1093             description
1094             "The specified mountpoint has been unmounted as the
1095             result of a management operation";
1096          }
1097       }
1098    }
1099    typedef subtree-ref {
1100       type string;    // string pattern to be defined
1101       description
1102          "This string specifies a path to a data node. It corresponds
1103          to the path substatement of a leafref type statement. Its
1104          syntax needs to conform to the corresponding subset of the
1105          XPath abbreviated syntax. Contrary to a leafref type, a
1106          subtree-ref allows a node in a remote datastore to be referenced.
1107          Also, a subtree-ref refers only to a single node, not a list
1108          of nodes.";
1109    }
1110    rpc mount {
1111       description
1112          "This RPC allows an application or administrative user to
1113          perform a mount operation. If successful, it will result in
1114          the creation of a new mountpoint.";
1115       input {
1116          leaf mountpoint-id {
1117             type string {
1118                length "1..32";
1119             }
1120          }
1121       }
1122       output {
1123          leaf mount-status {
1124             type mount-status;
1125          }
1126       }
1127    }
1128    rpc unmount {
         description
1129          "This RPC allows an application or administrative user to
1130          unmount information from a remote datastore. If successful,
1131          the corresponding mountpoint will be removed from the
1132          datastore.";
1133       input {
1134          leaf mountpoint-id {
1135             type string {
1136                length "1..32";
1137             }
1138          }
1139       }
1140       output {
1141          leaf mount-status {
1142             type mount-status;
1143          }
1144       }
1145    }
1146    grouping mount-monitor {
1147       leaf mount-status {
1148          description
1149             "Indicates whether a mountpoint has been successfully
1150             mounted or whether some kind of fault condition is
1151             present.";
1152          type mount-status;
1153          config false;
1154       }
1155    }
1156    grouping mount-target {
1157       description
1158          "This grouping contains data nodes that can be used to
1159          identify a remote system from which to mount a datastore
1160          subtree.";
1161       container mount-target {
1162          choice target-address-type {
1163             mandatory true;
1164             case IP {
1165                leaf target-ip {
1166                   type yang:ip-address;
1167                }
                }
1168             case URI {
1169                leaf uri {
1170                   type yang:uri;
1171                }
1172             }
1173             case host-name {
1174                leaf hostname {
1175                   type yang:host;
1176                }
1177             }
1178             case node-ID {
1179                leaf node-info-ref {
1180                   type subtree-ref;
1181                }

1183             }
1184             case other {
1185                leaf opaque-target-ID {
1186                   type string;
1187                   description
1188                      "Catch-all; could also be used for mounting
1189                      data nodes that are local.";
1190                }
1191             }
1192          }
1193       }
1194    }
1195    grouping mount-policies {
1196       description
1197          "This grouping contains data nodes that allow policies
1198          associated with mountpoints to be configured.";
1199       leaf manual-mount {
1200          type empty;
1201          description
1202             "When present, a specified mountpoint is not
1203             automatically mounted when the mount data node is
1204             created, but needs to be mounted via a specific RPC
1205             invocation.";
1206       }
1207       leaf retry-timer {
1208          type uint16;
1209          units "seconds";
1210          description
1211             "When specified, provides the period after which
1212             mounting will be automatically reattempted in case of a
1213             mount status of an unreachable target";
1214       }
1215       leaf number-of-retries {
1216          type uint8;
1217          description
1218             "When specified, provides a limit on the number of
1219             times for which retries will be automatically
1220             attempted";
1221       }
1222    }

1224    container mount-server-mgmt {
1225       if-feature mount-server-mgmt;
1226       container mountpoints {
1227          list mountpoint {
1228             key "mountpoint-id";

1230             leaf mountpoint-id {
1231                type string {
1232                   length "1..32";
1233                }
1234             }
1235             leaf mountpoint-origin {
1236                type enumeration {
1237                   enum client {
1238                      description
1239                         "Mountpoint has been supplied and is
1240                         manually administered by a client";
1241                   }
1242                   enum auto {
1243                      description
1244                         "Mountpoint is automatically
1245                         administered by the server";
1246                   }
1247                }
1248                config false;
1249             }
1250             uses mount-target;
1251             leaf subtree-ref {
1252                type subtree-ref;
1253                mandatory true;
1254             }
1255             uses mount-monitor;
1256             uses mount-policies;
1257          }
1258       }
1259       container global-mount-policies {
1260          uses mount-policies;
1261          description
1262 "Provides mount policies applicable for all mountpoints, 1263 unless overridden for a specific mountpoint."; 1264 } 1265 } 1266 } 1267 1269 7. Security Considerations 1271 TBD 1273 8. Acknowledgements 1275 We wish to acknowledge the helpful contributions, comments, and 1276 suggestions that were received from Tony Tkacik, Ambika Tripathy, 1277 Robert Varga, Prabhakara Yellai, Shashi Kumar Bansal, Lukas Sedlak, 1278 and Benoit Claise. 1280 9. References 1282 9.1. Normative References 1284 [RFC2131] Droms, R., "Dynamic Host Configuration Protocol", RFC 1285 2131, March 1997. 1287 [RFC2866] Rigney, C., "RADIUS Accounting", RFC 2866, June 2000. 1289 [RFC3768] Hinden, R., "Virtual Router Redundancy Protocol (VRRP)", 1290 RFC 3768, April 2004. 1292 [RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform 1293 Resource Identifier (URI): Generic Syntax", STD 66, RFC 1294 3986, January 2005. 1296 [RFC6020] Bjorklund, M., "YANG - A Data Modeling Language for the 1297 Network Configuration Protocol (NETCONF)", RFC 6020, 1298 October 2010. 1300 [RFC6241] Enns, R., Bjorklund, M., Schoenwaelder, J., and A. 1301 Bierman, "Network Configuration Protocol (NETCONF)", RFC 1302 6241, June 2011. 1304 [RFC6536] Bierman, A. and M. Bjorklund, "Network Configuration 1305 Protocol (NETCONF) Access Control Model", RFC 6536, March 1306 2012. 1308 9.2. Informative References 1310 [I-D.ietf-netconf-restconf] 1311 Bierman, A., Bjorklund, M., Watsen, K., and R. Fernando, 1312 "RESTCONF Protocol", draft-ietf-netconf-restconf-01 (work 1313 in progress), July 2014. 1315 [draft-haas-i2rs-netmod-netconf-requirements] 1316 Haas, J., "I2RS Requirements for Netmod/Netconf", draft- 1317 haas-i2rs-netmod-netconf-requirements-02 (work in 1318 progress), September 2014. 1320 [peermount-req] 1321 Voit, E., Clemm, A., Bansal, S., Tripathy, A., and P. 1322 Yellai, "Requirements for Peer Mounting of YANG subtrees 1323 from Remote Datastores", draft-voit-netmod-peer-mount- 1324 requirements-00 (work in progress), September 2014. 1326 Appendix A. Example 1328 In the following example, we are assuming the use case of a network 1329 controller that wants to provide a controller network view to its 1330 client applications. This view needs to include network abstractions 1331 that are maintained by the controller itself, as well as certain 1332 information about network devices where the network abstractions tie 1333 in with element-specific information. For this purpose, the network 1334 controller leverages the mount capability specified in this document 1335 and presents a fictitious Controller Network YANG Module that is 1336 depicted in the outlined structure below. 
   The example illustrates how mounted information is leveraged by the
   mounting datastore to provide an additional level of information
   that ties together network and device abstractions, which could not
   be provided otherwise without introducing a (redundant) model to
   replicate those device abstractions.

      rw controller-network
      +-- rw topologies
      |  +-- rw topology [topo-id]
      |     +-- rw topo-id            node-id
      |     +-- rw nodes
      |     |  +-- rw node [node-id]
      |     |     +-- rw node-id            node-id
      |     |     +-- rw supporting-ne      network-element-ref
      |     |     +-- rw termination-points
      |     |        +-- rw term-point [tp-id]
      |     |           +-- tp-id            tp-id
      |     |           +-- ifref            mountedIfRef
      |     +-- rw links
      |        +-- rw link [link-id]
      |           +-- rw link-id   link-id
      |           +-- rw source    tp-ref
      |           +-- rw dest      tp-ref
      +-- rw network-elements
         +-- rw network-element [element-id]
            +-- rw element-id       element-id
            +-- rw element-address
            |  +-- ...
            +-- M  interfaces

   The controller network model consists of the following key
   components:

   o  A container with a list of topologies.  A topology is a graph
      representation of a network at a particular layer, for example,
      an IS-IS topology, an overlay topology, or an Openflow topology.
      Specific topology types can be defined in their own separate
      YANG modules that augment the controller network model.  Those
      augmentations are outside the scope of this example.

   o  An inventory of network elements, along with certain information
      that is mounted from each element.  The information that is
      mounted in this case concerns interface configuration
      information.  For this purpose, each list element that
      represents a network element contains a corresponding
      mountpoint.  The mountpoint uses as its target the network
      element address information provided in the same list element
      (a sketch of the resulting mountpoint entry follows this list).

   o  Each topology in turn contains a container with a list of nodes.
      A node is a network abstraction of a network device in the
      topology.  A node is hosted on a network element, as indicated
      by a network-element leafref.  This way, the "logical" and
      "physical" aspects of a node in the network are cleanly
      separated.

   o  A node also contains a list of termination points that terminate
      links.  A termination point is implemented on an interface.
      Therefore, it contains a leafref that references the
      corresponding interface configuration, which is part of the
      mounted information of a network element.  Again, the
      distinction between termination points and interfaces provides a
      clean separation between logical concepts at the network
      topology level and device-specific concepts that are
      instantiated at the level of a network element.  Because the
      interface information is mounted from a different datastore and
      therefore occurs at a different level of the containment
      hierarchy than it would if it were not mounted, it is not
      possible to use the interface-ref type that is defined in the
      YANG data model for interface management [] to allow the
      termination point to refer to its supporting interface.  For
      this reason, a new type definition "mountedIfRef" is introduced
      that allows referring to interface information that is mounted
      and hence has a different path.

   o  Finally, a topology also contains a container with a list of
      links.  A link is a network abstraction that connects nodes via
      node termination points.  In the example, directional
      point-to-point links are depicted, in which one node termination
      point serves as the source and another as the destination.
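   Assuming the controller supports the optional 'mount-server-mgmt'
   feature defined earlier in this document, the mountpoint that
   provides the interface information of one of the network elements
   (NE1 in the instance data shown later in this appendix) might be
   reflected in the controller's mountpoint list roughly as follows.
   This is a non-normative sketch: the namespace URI, the
   mountpoint-id, and the target IP address are invented for
   illustration, and only a subset of the nodes defined by the
   groupings above is shown:

     <mount-server-mgmt xmlns="urn:example:params:xml:ns:yang:mount">
       <mountpoints>
         <mountpoint>
           <mountpoint-id>NE1-interfaces</mountpoint-id>
           <mountpoint-origin>auto</mountpoint-origin>
           <mount-target>
             <target-ip>192.0.2.1</target-ip>
           </mount-target>
           <subtree-ref>/if:interfaces</subtree-ref>
           <mount-status>ok</mount-status>
           <retry-timer>60</retry-timer>
         </mountpoint>
       </mountpoints>
     </mount-server-mgmt>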
   The following is a YANG snippet of the module definition that makes
   use of the mountpoint definition.

   module controller-network {
     namespace "urn:cisco:params:xml:ns:yang:controller-network";
     // example only, replace with IANA namespace when assigned
     prefix cn;

     import mount {
       prefix mnt;
     }
     import interfaces {
       prefix if;
     }
     ...
     typedef mountedIfRef {
       type leafref {
         path "/cn:controller-network/cn:network-elements/"
            + "cn:network-element/cn:interfaces/if:interface/if:name";
            // cn:interfaces corresponds to the mountpoint
       }
     }
     ...
     list termination-point {
       key "tp-id";
       ...
       leaf ifref {
         type mountedIfRef;
       }
     ...
     list network-element {
       key "element-id";
       leaf element-id {
         type element-ID;
       }
       container element-address {
         ...  // choice definition that allows specifying
              // host name,
              // IP addresses, URIs, etc.
       }
       mnt:mountpoint "interfaces" {
         mnt:target "./element-address";
         mnt:subtree "/if:interfaces";
       }
       ...
     }
     ...

   Finally, the following contains an XML snippet of instantiated YANG
   information.  We assume three datastores: NE1 and NE2 each have a
   datastore (the mount targets) that contains interface configuration
   data, which is mounted into NC's datastore (the mount client).

   Interface information from the NE1 datastore:

     <interfaces>
       <interface>
         <name>fastethernet-1/0</name>
         <type>ethernetCsmacd</type>
         <location>1/0</location>
       </interface>
       <interface>
         <name>fastethernet-1/1</name>
         <type>ethernetCsmacd</type>
         <location>1/1</location>
       </interface>
     </interfaces>

   Interface information from the NE2 datastore:

     <interfaces>
       <interface>
         <name>fastethernet-1/0</name>
         <type>ethernetCsmacd</type>
         <location>1/0</location>
       </interface>
       <interface>
         <name>fastethernet-1/2</name>
         <type>ethernetCsmacd</type>
         <location>1/2</location>
       </interface>
     </interfaces>

   NC datastore with mounted interface information from NE1 and NE2:

     <controller-network>
       ...
       <network-elements>
         <network-element>
           <element-id>NE1</element-id>
           <element-address>....</element-address>
           <interfaces>
             <interface>
               <name>fastethernet-1/0</name>
               <type>ethernetCsmacd</type>
               <location>1/0</location>
             </interface>
             <interface>
               <name>fastethernet-1/1</name>
               <type>ethernetCsmacd</type>
               <location>1/1</location>
             </interface>
           </interfaces>
         </network-element>
         <network-element>
           <element-id>NE2</element-id>
           <element-address>....</element-address>
           <interfaces>
             <interface>
               <name>fastethernet-1/0</name>
               <type>ethernetCsmacd</type>
               <location>1/0</location>
             </interface>
             <interface>
               <name>fastethernet-1/2</name>
               <type>ethernetCsmacd</type>
               <location>1/2</location>
             </interface>
           </interfaces>
         </network-element>
       </network-elements>
       ...
     </controller-network>

Authors' Addresses

   Alexander Clemm
   Cisco Systems

   EMail: alex@cisco.com

   Jan Medved
   Cisco Systems

   EMail: jmedved@cisco.com

   Eric Voit
   Cisco Systems

   EMail: evoit@cisco.com