2 NFSv4 B. Halevy 3 Internet-Draft T. Haynes 4 Intended status: Informational Primary Data 5 Expires: June 4, 2015 December 01, 2014 7 Parallel NFS (pNFS) Flexible File Layout 8 draft-ietf-nfsv4-flex-files-03.txt 10 Abstract 12 The Parallel Network File System (pNFS) allows a separation between 13 the metadata and data for a file.
The metadata file access is 14 handled via Network File System version 4 (NFSv4) minor version 1 15 (NFSv4.1) and the data file access is specific to the protocol being 16 used between the client and storage device. The client is informed 17 by the metadata server as to which protocol to use via a Layout Type. 18 The Flexible File Layout Type is defined in this document as an 19 extension to NFSv4.1 to allow the use of storage devices which need 20 not be tightly coupled to the metadata server. 22 Status of This Memo 24 This Internet-Draft is submitted in full conformance with the 25 provisions of BCP 78 and BCP 79. 27 Internet-Drafts are working documents of the Internet Engineering 28 Task Force (IETF). Note that other groups may also distribute 29 working documents as Internet-Drafts. The list of current Internet- 30 Drafts is at http://datatracker.ietf.org/drafts/current/. 32 Internet-Drafts are draft documents valid for a maximum of six months 33 and may be updated, replaced, or obsoleted by other documents at any 34 time. It is inappropriate to use Internet-Drafts as reference 35 material or to cite them other than as "work in progress." 37 This Internet-Draft will expire on June 4, 2015. 39 Copyright Notice 41 Copyright (c) 2014 IETF Trust and the persons identified as the 42 document authors. All rights reserved. 44 This document is subject to BCP 78 and the IETF Trust's Legal 45 Provisions Relating to IETF Documents 46 (http://trustee.ietf.org/license-info) in effect on the date of 47 publication of this document. Please review these documents 48 carefully, as they describe your rights and restrictions with respect 49 to this document. Code Components extracted from this document must 50 include Simplified BSD License text as described in Section 4.e of 51 the Trust Legal Provisions and are provided without warranty as 52 described in the Simplified BSD License. 54 Table of Contents 56 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 57 1.1. Definitions . . . . . . . . . . . . . . . . . . . . . . . 3 58 1.2. Difference Between a Data Server and a Storage Device . . 5 59 1.3. Requirements Language . . . . . . . . . . . . . . . . . . 5 60 2. Coupling of Storage Devices . . . . . . . . . . . . . . . . . 6 61 2.1. LAYOUTCOMMIT . . . . . . . . . . . . . . . . . . . . . . 6 62 2.2. Security Models . . . . . . . . . . . . . . . . . . . . . 6 63 2.3. State and Locking Models . . . . . . . . . . . . . . . . 7 64 3. XDR Description of the Flexible File Layout Type . . . . . . 7 65 3.1. Code Components Licensing Notice . . . . . . . . . . . . 8 66 4. Device Addressing and Discovery . . . . . . . . . . . . . . . 9 67 4.1. ff_device_addr4 . . . . . . . . . . . . . . . . . . . . . 9 68 4.2. Storage Device Multipathing . . . . . . . . . . . . . . . 10 69 5. Flexible File Layout Type . . . . . . . . . . . . . . . . . . 11 70 5.1. ff_layout4 . . . . . . . . . . . . . . . . . . . . . . . 12 71 5.2. Interactions Between Devices and Layouts . . . . . . . . 14 72 6. Striping via Sparse Mapping . . . . . . . . . . . . . . . . . 14 73 7. Recovering from Client I/O Errors . . . . . . . . . . . . . . 15 74 8. Mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . 15 75 8.1. Selecting a Mirror . . . . . . . . . . . . . . . . . . . 16 76 8.2. Writing to Mirrors . . . . . . . . . . . . . . . . . . . 17 77 8.3. Metadata Server Resilvering of the File . . . . . . . . . 17 78 9. Flexible Files Layout Type Return . . . . . . . . . . . . . . 17 79 9.1. I/O Error Reporting . . . . . . . . . 
. . . . . . . . . . 18 80 9.1.1. ff_ioerr4 . . . . . . . . . . . . . . . . . . . . . . 18 81 9.2. Layout Usage Statistics . . . . . . . . . . . . . . . . . 19 82 9.2.1. ff_io_latency4 . . . . . . . . . . . . . . . . . . . 19 83 9.2.2. ff_layoutupdate4 . . . . . . . . . . . . . . . . . . 19 84 9.2.3. ff_iostats4 . . . . . . . . . . . . . . . . . . . . . 20 85 9.3. ff_layoutreturn4 . . . . . . . . . . . . . . . . . . . . 21 86 10. Flexible Files Layout Type LAYOUTERROR . . . . . . . . . . . 21 87 11. Flexible Files Layout Type LAYOUTSTATS . . . . . . . . . . . 21 88 12. Flexible File Layout Type Creation Hint . . . . . . . . . . . 21 89 12.1. ff_layouthint4 . . . . . . . . . . . . . . . . . . . . . 22 90 13. Recalling Layouts . . . . . . . . . . . . . . . . . . . . . . 22 91 13.1. CB_RECALL_ANY . . . . . . . . . . . . . . . . . . . . . 22 92 14. Client Fencing . . . . . . . . . . . . . . . . . . . . . . . 23 93 15. Security Considerations . . . . . . . . . . . . . . . . . . . 24 94 15.1. Kerberized File Access . . . . . . . . . . . . . . . . . 24 95 15.1.1. Loosely Coupled . . . . . . . . . . . . . . . . . . 24 96 15.1.2. Tightly Coupled . . . . . . . . . . . . . . . . . . 25 98 16. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 25 99 17. References . . . . . . . . . . . . . . . . . . . . . . . . . 25 100 17.1. Normative References . . . . . . . . . . . . . . . . . . 25 101 17.2. Informative References . . . . . . . . . . . . . . . . . 26 102 Appendix A. Acknowledgments . . . . . . . . . . . . . . . . . . 26 103 Appendix B. RFC Editor Notes . . . . . . . . . . . . . . . . . . 26 104 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 27 106 1. Introduction 108 In the parallel Network File System (pNFS), the metadata server 109 returns Layout Type structures that describe where file data is 110 located. There are different Layout Types for different storage 111 systems and methods of arranging data on storage devices. This 112 document defines the Flexible File Layout Type used with file-based 113 data servers that are accessed using the Network File System (NFS) 114 protocols: NFSv3 [RFC1813], NFSv4 [RFC3530], NFSv4.1 [RFC5661], and 115 NFSv4.2 [NFSv42]. 117 To provide a global state model equivalent to that of the Files 118 Layout Type, a back-end control protocol MAY be implemented between 119 the metadata server and NFSv4.1 storage devices. It is out of scope 120 for this document to specify the wire protocol of such a protocol, 121 yet the requirements for the protocol are specified in [RFC5661] and 122 clarified in [pNFSLayouts]. 124 1.1. Definitions 126 control protocol: is a set of requirements for the communication of 127 information on layouts, stateids, file metadata, and file data 128 between the metadata server and the storage devices (see 129 [pNFSLayouts]). 131 Client-side Mirroring: is when the client and not the server is 132 responsible for updating all of the mirrored copies of a file. 134 data file: is that part of the file system object which describes 135 the payload and not the object. E.g., it is the file contents. 137 Data Server (DS): is one of the pNFS servers which provide the 138 contents of a file system object which is a regular file. 139 Depending on the layout, there might be one or more data servers 140 over which the data is striped. 
Note that while the metadata 141 server is strictly accessed over the NFSv4.1 protocol, depending 142 on the Layout Type, the data server could be accessed via any 143 protocol that meets the pNFS requirements. 145 fencing: is when the metadata server prevents the storage devices 146 from processing I/O from a specific client to a specific file. 148 File Layout Type: is a Layout Type in which the storage devices are 149 accessed via the NFSv4.1 protocol. It is defined in Section 13 of 150 [RFC5661]. 152 layout: informs a client of which storage devices it needs to 153 communicate with (and over which protocol) to perform I/O on a 154 file. The layout might also provide some hints about how the 155 storage is physically organized. 157 layout iomode: describes whether the layout granted to the client is 158 for read or read/write I/O. 160 layout stateid: is a 128-bit quantity returned by a server that 161 uniquely defines the layout state provided by the server for a 162 specific layout that describes a Layout Type and file (see 163 Section 12.5.2 of [RFC5661]). Further, Section 12.5.3 describes 164 the difference between a layout stateid and a normal stateid. 166 Layout Type: describes both the storage protocol used to access the 167 data and the aggregation scheme used to lay out the file data on 168 the underlying storage devices. 170 loose coupling: is when the metadata server and the storage devices 171 do not have a control protocol present. 173 metadata file: is that part of the file system object which 174 describes the object and not the payload. E.g., it could be the 175 time since last modification, access, etc. 177 Metadata Server (MDS): is the pNFS server which provides metadata 178 information for a file system object. It also is responsible for 179 generating layouts for file system objects. Note that the MDS is 180 responsible for directory-based operations. 182 Mirror: is a copy of a file. While mirroring can be used for 183 backing up a file, the copies can be distributed such that each 184 remote site has a locally cached copy. Note that if one copy of 185 the mirror is updated, then all copies must be updated. 187 Object Layout Type: is a Layout Type in which the storage devices 188 are accessed via the OSD protocol [ANSI400-2004]. It is defined 189 in [RFC5664]. 191 recalling a layout: is when the metadata server uses a back channel 192 to inform the client that the layout is to be returned in a 193 graceful manner. Note that the client is able to flush any 194 writes, etc., before replying to the metadata server. 196 revoking a layout: is when the metadata server invalidates the 197 layout such that neither the metadata server nor any storage 198 device will accept any access from the client with that layout. 200 resilvering: is the act of rebuilding a mirrored copy of a file from 201 a known good copy of the file. Note that this can also be done to 202 create a new mirrored copy of the file. 204 rsize: is the data transfer buffer size used for reads. 206 stateid: is a 128-bit quantity returned by a server that uniquely 207 defines the open and locking states provided by the server for a 208 specific open-owner or lock-owner/open-owner pair for a specific 209 file and type of lock. 211 storage device: is another term used almost interchangeably with 212 data server. See Section 1.2 for the nuances between the two. 214 tight coupling: is when the metadata server and the storage devices 215 do have a control protocol present.
217 wsize: is the data transfer buffer size used for writes. 219 1.2. Difference Between a Data Server and a Storage Device 221 We defined a data server as a pNFS server, which implies that it can 222 utilize the NFSv4.1 protocol to communicate with the client. As 223 such, only the File Layout Type would currently meet this 224 requirement. The more generic concept is a storage device, which can 225 use any protocol to communicate with the client. The requirements 226 for a storage device to act together with the metadata server to 227 provide data to a client are that there is a Layout Type 228 specification for the given protocol and that the metadata server has 229 granted a layout to the client. Note that nothing precludes there 230 being multiple supported Layout Types (i.e., protocols) between a 231 metadata server, storage devices, and client. 233 As storage device is the more encompassing term, this document 234 uses it in preference to data server. 236 1.3. Requirements Language 238 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 239 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 240 document are to be interpreted as described in [RFC2119]. 242 2. Coupling of Storage Devices 244 The coupling of the metadata server with the storage devices can be 245 either tight or loose. In a tight coupling, there is a control 246 protocol present to manage security, LAYOUTCOMMITs, etc. With a 247 loose coupling, the only control protocol might be a version of NFS. 248 As such, semantics for managing security, state, and locking models 249 MUST be defined. 251 A file is split into metadata and data. The "metadata file" is that 252 part of the file stored on the metadata server. The "data file" is 253 that part of the file stored on the storage device. And the "file" 254 is the combination of the two. 256 2.1. LAYOUTCOMMIT 258 With a tightly coupled system, when the metadata server receives a 259 LAYOUTCOMMIT (see Section 18.42 of [RFC5661]), the semantics of the 260 File Layout Type MUST be met (see Section 12.5.4 of [RFC5661]). With 261 a loosely coupled system, a LAYOUTCOMMIT to the metadata server MUST 262 be preceded by a COMMIT to the storage device. I.e., it is the 263 responsibility of the client to make sure the data file is stable 264 before the metadata server begins to query the storage devices about 265 the changes to the file. Note that if the client has not done a 266 COMMIT to the storage device, then the LAYOUTCOMMIT might not be 267 synchronized to the last WRITE operation to the storage device. 269 2.2. Security Models 271 With loosely coupled storage devices, the metadata server uses 272 synthetic uids and gids for the data file, where the uid owner of the 273 data file is allowed read/write access and the gid owner is allowed 274 read only access. As part of the layout, the client is provided with 275 the rpc credentials to be used (see ffds_auth in Section 5.1) to 276 access the data file. Fencing off clients is achieved by the server 277 using SETATTR to change the uid and/or gid owners of the data 278 file, implicitly revoking the outstanding rpc credentials. Note: it 279 is recommended to implement common access control methods at the 280 storage device filesystem exports level to allow only the metadata 281 server root (super user) access to the storage device, and to set the 282 owner of all directories holding data files to the root user.
This 283 security method, when using weak auth flavors such as AUTH_SYS, 284 provides a practical model to enforce access control and fence off 285 cooperative clients, but it can not protect against malicious 286 clients; hence it provides a level of security equivalent to NFSv3. 288 With tightly coupled storage devices, the metadata server sets the 289 user and group owners, mode bits, and ACL of the data file to be the 290 same as the metadata file. And the client must authenticate with the 291 storage device and go through the same authorization process it would 292 go through via the metadata server. 294 2.3. State and Locking Models 296 Metadata file OPEN, LOCK, and DELEGATION operations are always 297 executed only against the metadata server. 299 With NFSv4 storage devices, the metadata server, in response to the 300 state changing operation, executes them against the respective data 301 files on the storage devices. It then sends the storage device open 302 stateid as part of the layout (see the ffm_stateid in Section 5.1) 303 and it is then used by the client for executing READ/WRITE operations 304 against the storage device. 306 Standalone NFSv4.1 storage devices that do not return the 307 EXCHGID4_FLAG_USE_PNFS_DS flag to EXCHANGE_ID are used the same way 308 as NFSv4 storage devices. 310 NFSv4.1 clustered storage devices that do identify themselves with 311 the EXCHGID4_FLAG_USE_PNFS_DS flag to EXCHANGE_ID use a back-end 312 control protocol as described in [RFC5661] to implement a global 313 stateid model as defined there. 315 3. XDR Description of the Flexible File Layout Type 317 This document contains the external data representation (XDR) 318 [RFC4506] description of the Flexible File Layout Type. The XDR 319 description is embedded in this document in a way that makes it 320 simple for the reader to extract into a ready-to-compile form. The 321 reader can feed this document into the following shell script to 322 produce the machine readable XDR description of the Flexible File 323 Layout Type: 325 #!/bin/sh 326 grep '^ *///' $* | sed 's?^ */// ??' | sed 's?^ *///$??' 328 That is, if the above script is stored in a file called "extract.sh", 329 and this document is in a file called "spec.txt", then the reader can 330 do: 332 sh extract.sh < spec.txt > flex_files_prot.x 334 The effect of the script is to remove leading white space from each 335 line, plus a sentinel sequence of "///". 337 The embedded XDR file header follows. Subsequent XDR descriptions, 338 with the sentinel sequence are embedded throughout the document. 340 Note that the XDR code contained in this document depends on types 341 from the NFSv4.1 nfs4_prot.x file [RFC5662]. This includes both nfs 342 types that end with a 4, such as offset4, length4, etc., as well as 343 more generic types such as uint32_t and uint64_t. 345 3.1. Code Components Licensing Notice 347 Both the XDR description and the scripts used for extracting the XDR 348 description are Code Components as described in Section 4 of "Legal 349 Provisions Relating to IETF Documents" [LEGAL]. These Code 350 Components are licensed according to the terms of that document. 352 /// /* 353 /// * Copyright (c) 2012 IETF Trust and the persons identified 354 /// * as authors of the code. All rights reserved. 
355 /// * 356 /// * Redistribution and use in source and binary forms, with 357 /// * or without modification, are permitted provided that the 358 /// * following conditions are met: 359 /// * 360 /// * o Redistributions of source code must retain the above 361 /// * copyright notice, this list of conditions and the 362 /// * following disclaimer. 363 /// * 364 /// * o Redistributions in binary form must reproduce the above 365 /// * copyright notice, this list of conditions and the 366 /// * following disclaimer in the documentation and/or other 367 /// * materials provided with the distribution. 368 /// * 369 /// * o Neither the name of Internet Society, IETF or IETF 370 /// * Trust, nor the names of specific contributors, may be 371 /// * used to endorse or promote products derived from this 372 /// * software without specific prior written permission. 373 /// * 374 /// * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS 375 /// * AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED 376 /// * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 377 /// * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS 378 /// * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO 379 /// * EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE 380 /// * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, 381 /// * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT 382 /// * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 383 /// * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS 384 /// * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF 385 /// * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 386 /// * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 387 /// * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF 388 /// * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 389 /// * 390 /// * This code was derived from RFCTBD10. 391 /// * Please reproduce this note if possible. 392 /// */ 393 /// 394 /// /* 395 /// * flex_files_prot.x 396 /// */ 397 /// 398 /// /* 399 /// * The following include statements are for example only. 400 /// * The actual XDR definition files are generated separately 401 /// * and independently and are likely to have a different name. 402 /// * %#include 403 /// * %#include 404 /// */ 405 /// 407 4. Device Addressing and Discovery 409 Data operations to a storage device require the client to know the 410 network address of the storage device. The NFSv4.1 GETDEVICEINFO 411 operation (Section 18.40 of [RFC5661]) is used by the client to 412 retrieve that information. 414 4.1. ff_device_addr4 416 The ff_device_addr4 data structure is returned by the server as the 417 storage protocol specific opaque field da_addr_body in the 418 device_addr4 structure by a successful GETDEVICEINFO operation. 420 /// struct ff_device_addr4 { 421 /// multipath_list4 ffda_netaddrs; 422 /// uint32_t ffda_version; 423 /// uint32_t ffda_minorversion; 424 /// uint32_t ffda_rsize; 425 /// uint32_t ffda_wsize; 426 /// bool ffda_tightly_coupled; 427 /// }; 428 /// 430 The ffda_netaddrs field is used to locate the storage device. It 431 MUST be set by the server to a list holding one or more of the device 432 network addresses. 434 The ffda_version and ffda_minorversion represent the NFS protocol to 435 be used to access the storage device. This layout specification 436 defines the semantics for ffda_versions 3 and 4. If ffda_version 437 equals 3 then server MUST set ffda_minorversion to 0 and the client 438 MUST access the storage device using the NFSv3 protocol [RFC1813]. 
439 If ffda_version equals 4 then the server MUST set ffda_minorversion 440 to one of the NFSv4 minor version numbers and the client MUST access 441 the storage device using NFSv4. 443 The ffda_rsize and ffda_wsize are used to communicate the maximum 444 rsize and wsize supported by the storage device. As the storage 445 device can have a different rsize or wsize than the metadata server, 446 the ffda_rsize and ffda_wsize allow the metadata server to 447 communicate that information on behalf of the storage device. 449 ffda_tightly_coupled informs the client as to whether the metadata 450 server is tightly coupled with the storage devices or not. Note that 451 even if the data protocol is at least NFSv4.1, it may still be the 452 case that there is no control protocol present. If 453 ffda_tightly_coupled is not set, then the client MUST commit writes 454 to the storage devices for the file before sending a LAYOUTCOMMIT to 455 the metadata server. I.e., the writes MUST be committed by the 456 client to stable storage via issuing WRITEs with stable_how == 457 FILE_SYNC or by issuing a COMMIT after WRITEs with stable_how != 458 FILE_SYNC (see Section 3.3.7 of [RFC1813]). 460 4.2. Storage Device Multipathing 462 The Flexible File Layout Type supports multipathing to multiple 463 storage device addresses. Storage device level multipathing is used 464 for bandwidth scaling via trunking and for higher availability of use 465 in the case of a storage device failure. Multipathing allows the 466 client to switch to another storage device address which may be that 467 of another storage device that is exporting the same data stripe 468 unit, without having to contact the metadata server for a new layout. 470 To support storage device multipathing, ffda_netaddrs contains an 471 array of one or more storage device network addresses. This array 472 (data type multipath_list4) represents a list of storage device (each 473 identified by a network address), with the possibility that some 474 storage device will appear in the list multiple times. 476 The client is free to use any of the network addresses as a 477 destination to send storage device requests. If some network 478 addresses are less optimal paths to the data than others, then the 479 MDS SHOULD NOT include those network addresses in ffda_netaddrs. If 480 less optimal network addresses exist to provide failover, the 481 RECOMMENDED method to offer the addresses is to provide them in a 482 replacement device-ID-to-device-address mapping, or a replacement 483 device ID. When a client finds no response from the storage device 484 using all addresses available in ffda_netaddrs, it SHOULD send a 485 GETDEVICEINFO to attempt to replace the existing device-ID-to-device- 486 address mappings. If the MDS detects that all network paths 487 represented by ffda_netaddrs are unavailable, the MDS SHOULD send a 488 CB_NOTIFY_DEVICEID (if the client has indicated it wants device ID 489 notifications for changed device IDs) to change the device-ID-to- 490 device-address mappings to the available addresses. If the device ID 491 itself will be replaced, the MDS SHOULD recall all layouts with the 492 device ID, and thus force the client to get new layouts and device ID 493 mappings via LAYOUTGET and GETDEVICEINFO. 495 Generally, if two network addresses appear in ffda_netaddrs, they 496 will designate the same storage device. 
When the storage device is 497 accessed over NFSv4.1 or higher minor version the two storage device 498 addresses will support the implementation of client ID or session 499 trunking (the latter is RECOMMENDED) as defined in [RFC5661]. The 500 two storage device addresses will share the same server owner or 501 major ID of the server owner. It is not always necessary for the two 502 storage device addresses to designate the same storage device with 503 trunking being used. For example, the data could be read-only, and 504 the data consist of exact replicas. 506 5. Flexible File Layout Type 508 The layout4 type is defined in [RFC5662] as follows: 510 enum layouttype4 { 511 LAYOUT4_NFSV4_1_FILES = 1, 512 LAYOUT4_OSD2_OBJECTS = 2, 513 LAYOUT4_BLOCK_VOLUME = 3, 514 LAYOUT4_FLEX_FILES = 0x80000005 515 [[RFC Editor: please modify the LAYOUT4_FLEX_FILES 516 to be the layouttype assigned by IANA]] 517 }; 519 struct layout_content4 { 520 layouttype4 loc_type; 521 opaque loc_body<>; 522 }; 524 struct layout4 { 525 offset4 lo_offset; 526 length4 lo_length; 527 layoutiomode4 lo_iomode; 528 layout_content4 lo_content; 529 }; 531 This document defines structure associated with the layouttype4 value 532 LAYOUT4_FLEX_FILES. [RFC5661] specifies the loc_body structure as an 533 XDR type "opaque". The opaque layout is uninterpreted by the generic 534 pNFS client layers, but obviously must be interpreted by the Flexible 535 File Layout Type implementation. This section defines the structure 536 of this opaque value, ff_layout4. 538 5.1. ff_layout4 540 /// struct ff_data_server4 { 541 /// deviceid4 ffds_deviceid; 542 /// uint32_t ffds_efficiency; 543 /// stateid4 ffds_stateid; 544 /// nfs_fh4 ffds_fhandle; 545 /// opaque_auth ffds_auth; 546 /// }; 547 /// 549 /// struct ff_mirror4 { 550 /// ff_data_server4 ffm_data_servers<>; 551 /// }; 552 /// 554 /// struct ff_layout4 { 555 /// length4 ffl_stripe_unit; 556 /// ff_mirror4 ffl_mirrors<>; 557 /// }; 558 /// 560 The ff_layout4 structure specifies a layout over a set of mirrored 561 copies of the data file. This mirroring protects against loss of 562 data files. 564 It is possible that the file is concatenated from more than one 565 layout segment. Each layout segment MAY represent different striping 566 parameters, applying respectively only to the layout segment byte 567 range. 569 The ffl_stripe_unit field is the stripe unit size in use for the 570 current layout segment. The number of stripes is given inside each 571 mirror by the number of elements in ffm_data_servers. If the number 572 of stripes is one, then the value for ffl_stripe_unit MUST default to 573 zero. The only supported mapping scheme is sparse and is detailed in 574 Section 6. Note that there is an assumption here that both the 575 stripe unit size and the number of stripes is the same across all 576 mirrors. 578 The ffl_mirrors field is the array of mirrored storage devices which 579 provide the storage for the current stripe, see Figure 1. 581 +-----------+ 582 | | 583 | | 584 | File | 585 | | 586 | | 587 +-----+-----+ 588 | 589 +------------+------------+ 590 | | 591 +----+-----+ +-----+----+ 592 | Mirror 1 | | Mirror 2 | 593 +----+-----+ +-----+----+ 594 | | 595 +-----------+ +-----------+ 596 |+-----------+ |+-----------+ 597 ||+-----------+ ||+-----------+ 598 +|| Storage | +|| Storage | 599 +| Devices | +| Devices | 600 +-----------+ +-----------+ 602 Figure 1 604 The ffs_mirrors field represents an array of state information for 605 each mirrored copy of the file. 
Each element is described by a 606 ff_mirror4 type. 608 ffds_deviceid provides the deviceid of the storage device holding the 609 data file. 611 ffds_fhandle provides the filehandle of the data file on the given 612 storage device. For tight coupling, ffds_stateid provides the 613 stateid to be used by the client to access the file. For loose 614 coupling and an NFSv4 storage device, the client may use an anonymous 615 stateid to perform I/O on the storage device as there is no use for 616 the metadata server stateid (no control protocol). In such a 617 scenario, the server MUST set the ffds_stateid to be zero. 619 For loosely coupled storage devices, ffds_auth provides the RPC 620 credentials to be used by the client to access the data files. For 621 tightly coupled storage devices, the server SHOULD use the AUTH_NONE 622 flavor and a zero length opaque body to minimize the returned 623 structure length. I.e., if ffda_tightly_coupled (see Section 4.1) is 624 set, then the client MUST ignore ffds_auth. 626 ffds_efficiency describes the metadata server's evaluation as to the 627 effectiveness of each mirror. Note that this is per layout and not 628 per device as the metric may change due to perceived load, 629 availability to the metadata server, etc. Higher values denote 630 higher perceived utility. The way the client can select the best 631 mirror to access is discussed in Section 8.1. 633 5.2. Interactions Between Devices and Layouts 635 In [RFC5661], the File Layout Type is defined such that the 636 relationship between multipathing and filehandles can result in 637 either 0, 1, or N filehandles (see Section 13.3). Some rationales for 638 this are clustered servers which share the same filehandle or 639 allowing for multiple read-only copies of the file on the same 640 storage device. In the Flexible File Layout Type, there is only one 641 filehandle, independent of the multipathing being used. If the 642 metadata server wants to provide multiple read-only copies of the 643 same file on the same storage device, then it should provide multiple 644 ff_device_addr4, each as a mirror. The client can then determine 645 that, since the ffds_fhandle values are different, there are multiple 646 copies of the file available. 648 If the metadata server wants to allow access to the file with 649 different versions and/or minor versions of NFS, then for each 650 allowed version and/or minor version, a new ff_device_addr4 must be 651 defined. The client should not assume any relationship (or lack of 652 relationship) between the filehandles presented in ffds_fhandle. 653 I.e., even if the filehandles are binary equivalent for different 654 versions, they may have varying semantics. 656 6. Striping via Sparse Mapping 658 While other Layout Types support both dense and sparse mapping of 659 logical offsets to physical offsets within a file (see for example 660 Section 13.4 of [RFC5661]), the Flexible File Layout Type only 661 supports a sparse mapping. 663 With sparse mappings, the logical offset within a file (L) is also 664 the physical offset on the storage device. As detailed in 665 Section 13.4.4 of [RFC5661], this results in holes across each 666 storage device that does not contain the current stripe index. 668 L: logical offset into the file 670 W: stripe width 671 W = number of elements in ffm_data_servers 673 S: number of bytes in a stripe 674 S = W * ffl_stripe_unit 676 N: stripe number 677 N = L / S
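The following non-normative C sketch shows one way a client might apply the sparse mapping above to locate the data server index and the offset to use at the storage device for a given logical offset. The structure and function names are illustrative only and not part of the protocol.

   #include <stdint.h>
   #include <stdio.h>

   /* Illustrative only: maps a logical file offset to the index of the
    * data server within ffm_data_servers and to the offset used at that
    * storage device.  With sparse mapping, the storage device offset is
    * the logical offset itself. */
   struct stripe_location {
       uint32_t ds_index;       /* index into ffm_data_servers */
       uint64_t stripe_number;  /* N = L / S */
       uint64_t ds_offset;      /* offset used at the storage device */
   };

   static struct stripe_location
   ff_sparse_map(uint64_t L, uint64_t stripe_unit, uint32_t ds_count)
   {
       struct stripe_location loc;

       /* A single data server (where ffl_stripe_unit defaults to zero)
        * means everything lands on data server 0. */
       if (ds_count <= 1 || stripe_unit == 0) {
           loc.ds_index = 0;
           loc.stripe_number = 0;
           loc.ds_offset = L;
           return loc;
       }

       loc.ds_index = (uint32_t)((L / stripe_unit) % ds_count);
       loc.stripe_number = L / (stripe_unit * ds_count); /* N = L / (W * ffl_stripe_unit) */
       loc.ds_offset = L;                                /* sparse: physical == logical */
       return loc;
   }

   int main(void)
   {
       /* Example: 64 KB stripe unit, 4 data servers, logical offset 1 MB + 100. */
       struct stripe_location loc = ff_sparse_map(1048676, 65536, 4);

       printf("ds_index=%u stripe=%llu ds_offset=%llu\n",
              (unsigned)loc.ds_index,
              (unsigned long long)loc.stripe_number,
              (unsigned long long)loc.ds_offset);
       return 0;
   }

679 7.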
Recovering from Client I/O Errors 681 The pNFS client may encounter errors when directly accessing the 682 storage devices. However, it is the responsibility of the metadata 683 server to recover from the I/O errors. When the LAYOUT4_FLEX_FILES 684 layout type is used, the client MUST report the I/O errors to the 685 server at LAYOUTRETURN time using the ff_ioerr4 structure (see 686 Section 9.1.1). 688 The metadata server analyzes the error and determines the required 689 recovery operations such as recovering media failures or 690 reconstructing missing data files. 692 The metadata server SHOULD recall any outstanding layouts to allow it 693 exclusive write access to the stripes being recovered and to prevent 694 other clients from hitting the same error condition. In these cases, 695 the server MUST complete recovery before handing out any new layouts 696 to the affected byte ranges. 698 Although it MAY be acceptable for the client to propagate a 699 corresponding error to the application that initiated the I/O 700 operation and drop any unwritten data, the client SHOULD attempt to 701 retry the original I/O operation by requesting a new layout using 702 LAYOUTGET and retry the I/O operation(s) using the new layout, or the 703 client MAY just retry the I/O operation(s) using regular NFS READ or 704 WRITE operations via the metadata server. The client SHOULD attempt 705 to retrieve a new layout and retry the I/O operation using the 706 storage device first, and only if the error persists, retry the I/O 707 operation via the metadata server. 709 8. Mirroring 711 The Flexible File Layout Type has a simple model in place for the 712 mirroring of files. There is no assumption that each copy of the 713 mirror is stored identically on the storage devices, i.e., one device 714 might employ compression or deduplication on the file. However, the 715 over-the-wire transfer of the file contents MUST appear identical. 717 Note that, as a consequence of the selected XDR representation, 718 each mirrored copy of the file has the same striping pattern (see 719 Figure 1). 721 The metadata server is responsible for determining the number of 722 mirrored copies and the location of each mirror. While the client 723 may provide a hint as to how many copies it wants (see Section 12), the 724 metadata server can ignore that hint and, in any event, the client has 725 no means to dictate either the storage device (which also determines the 726 coupling and/or protocol levels used to access the file) or the location 727 of said storage device. 729 The updating of mirrored files is done via client-side mirroring. 730 With this approach, the client is responsible for making sure 731 modifications get to all copies of the file it is informed of via the 732 layout. If a file is being resilvered to a storage device, that 733 mirrored copy will not be in the layout. Thus, the metadata server 734 MUST update that copy until the client is presented with it in a layout. 735 Also, if the client is writing to the file via the metadata server, 736 e.g., using an earlier version of the protocol, then the metadata 737 server MUST update all copies of the mirror. As seen in Section 8.3, 738 during the resilvering, the layout is recalled, and the client has to 739 make modifications via the metadata server. 741 8.1. Selecting a Mirror 743 When the metadata server grants a layout to a client, it can let the 744 client know how fast it expects each mirror to be once the request 745 arrives at the storage devices via the ffds_efficiency member.
While 746 the algorithms to calculate that value are left to the metadata 747 server implementations, factors that could contribute to that 748 calculation include speed of the storage device, physical memory 749 available to the device, operating system version, current load, etc. 751 However, what SHOULD NOT be involved in that calculation is a 752 perceived network distance between the client and the storage device. 753 The client is better situated for making that determination based on 754 past interaction with the storage device over the different available 755 network interfaces between the two. I.e., the metadata server might 756 not know about a transient outage between the client and storage 757 device because it has no presence on the given subnet. 759 As such, it is the client which decides which mirror to access for 760 reading the file. The requirements for writing to a mirrored file 761 are presented below. 763 8.2. Writing to Mirrors 765 The client is responsible for updating all mirrored copies of the 766 file that it is given in the layout. If all but one copy is updated 767 successfully and the last one provides an error, then the client 768 needs to return the layout to the metadata server with an error 769 indicating that the update failed to that storage device. 771 The metadata server is then responsible for determining whether it wants 772 to remove the errant mirror from the layout, whether the mirror has 773 recovered from some transient error, etc. When the client tries to 774 get a new layout, the metadata server informs it of the decision by 775 the contents of the layout. The client MUST NOT make any assumptions 776 that the contents of the previous layout will match those of the new 777 one. If it has updates that were not committed, it MUST resend those 778 updates to all mirrors. 780 8.3. Metadata Server Resilvering of the File 782 The metadata server may elect to create a new mirror of the file at 783 any time. This might be to resilver a copy on a storage device which 784 was down for servicing, to provide a copy of the file on storage with 785 different storage performance characteristics, etc. As the client 786 will not be aware of the new mirror and the metadata server will not 787 be aware of updates that the client is making to the file, the 788 metadata server MUST recall the writable layout segment(s) that it is 789 resilvering. If the client issues a LAYOUTGET for a writable layout 790 segment which is in the process of being resilvered, then the 791 metadata server MUST deny that request with a NFS4ERR_LAYOUTTRYLATER. 792 The client can then perform the I/O through the metadata server.
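To summarize the client-side mirroring behavior described in Sections 8.1 and 8.2, the following non-normative C sketch shows the basic write path: every mirror listed in the layout must receive the update, and any failure obliges the client to report the errant storage device and, after obtaining a new layout, resend uncommitted updates to all mirrors. The data types and the write_to_mirror() helper are purely hypothetical stand-ins for an implementation's actual I/O path.

   #include <stdbool.h>
   #include <stddef.h>
   #include <stdint.h>
   #include <stdio.h>

   /* Hypothetical stand-in for issuing WRITEs to the data servers of one
    * mirror over the protocol named in the layout (NFSv3 or NFSv4.x). */
   static bool write_to_mirror(uint32_t mirror, uint64_t offset,
                               const void *buf, size_t len)
   {
       (void)buf;
       printf("mirror %u: WRITE offset %llu, %zu bytes\n",
              (unsigned)mirror, (unsigned long long)offset, len);
       return true;
   }

   /* Client-side mirroring: the update is applied to every mirror in the
    * layout.  On any failure, the caller is expected to LAYOUTRETURN with
    * an ff_ioerr4 naming the errant storage device and, after obtaining a
    * new layout, resend uncommitted updates to all mirrors. */
   static bool mirrored_write(uint32_t mirror_count, uint64_t offset,
                              const void *buf, size_t len)
   {
       bool ok = true;

       for (uint32_t m = 0; m < mirror_count; m++) {
           if (!write_to_mirror(m, offset, buf, len))
               ok = false;   /* remember the failure, report it at return time */
       }
       return ok;
   }

   int main(void)
   {
       char data[8192] = { 0 };

       return mirrored_write(2, 0, data, sizeof(data)) ? 0 : 1;
   }

794 9. Flexible Files Layout Type Return 796 layoutreturn_file4 is used in the LAYOUTRETURN operation to convey 797 layout-type specific information to the server. It is defined in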
It is defined in 798 [RFC5661] as follows: 800 struct layoutreturn_file4 { 801 offset4 lrf_offset; 802 length4 lrf_length; 803 stateid4 lrf_stateid; 804 /* layouttype4 specific data */ 805 opaque lrf_body<>; 806 }; 807 union layoutreturn4 switch(layoutreturn_type4 lr_returntype) { 808 case LAYOUTRETURN4_FILE: 809 layoutreturn_file4 lr_layout; 810 default: 811 void; 812 }; 814 struct LAYOUTRETURN4args { 815 /* CURRENT_FH: file */ 816 bool lora_reclaim; 817 layoutreturn_stateid lora_recallstateid; 818 layouttype4 lora_layout_type; 819 layoutiomode4 lora_iomode; 820 layoutreturn4 lora_layoutreturn; 821 }; 823 If the lora_layout_type layout type is LAYOUT4_FLEX_FILES, then the 824 lrf_body opaque value is defined by ff_layoutreturn4 (See 825 Section 9.3). It allows the client to report I/O error information 826 or layout usage statistics back to the metadata server as defined 827 below. 829 9.1. I/O Error Reporting 831 9.1.1. ff_ioerr4 833 /// struct ff_ioerr4 { 834 /// offset4 ffie_offset; 835 /// length4 ffie_length; 836 /// stateid4 ffie_stateid; 837 /// device_error4 ffie_errors; 838 /// }; 839 /// 841 Recall that [NFSv42] defines device_error4 as: 843 struct device_error4 { 844 deviceid4 de_deviceid; 845 nfsstat4 de_status; 846 nfs_opnum4 de_opnum; 847 }; 849 The ff_ioerr4 structure is used to return error indications for data 850 files that generated errors during data transfers. These are hints 851 to the metadata server that there are problems with that file. For 852 each error, ffie_errors.de_deviceid, ffie_offset, and ffie_length 853 represent the storage device and byte range within the file in which 854 the error occurred; ffie_errors represents the operation and type of 855 error. The use of device_error4 is described in Section 15.6 of 856 [NFSv42]. 858 Even though the storage device might be accessed via NFSv3 and 859 reports back NFSv3 errors to the client, the client is responsible 860 for mapping these to appropriate NFSv4 status codes as de_status. 861 Likewise, the NFSv3 operations need to be mapped to equivalent NFSv4 862 operations. 864 9.2. Layout Usage Statistics 866 9.2.1. ff_io_latency4 868 /// struct ff_io_latency4 { 869 /// nfstime4 ffil_min; 870 /// nfstime4 ffil_max; 871 /// nfstime4 ffil_avg; 872 /// uint32_t ffil_count; 873 /// }; 874 /// 876 When determining latencies, the client can collect the minimum via 877 ffil_min, the maximum via ffil_max, and the average via ffil_avg. 878 Further, ffil_count relates how many data points were collected in 879 the reported period. 881 9.2.2. ff_layoutupdate4 883 /// struct ff_layoutupdate4 { 884 /// netaddr4 ffl_addr; 885 /// nfs_fh4 ffl_fhandle; 886 /// ff_io_latency4 ffl_read; 887 /// ff_io_latency4 ffl_write; 888 /// uint32_t ffl_queue_depth; 889 /// nfstime4 ffl_duration; 890 /// bool ffl_local; 891 /// }; 892 /// 894 ffl_addr differentiates which network address the client connected to 895 on the storage device. In the case of multipathing, ffl_fhandle 896 indicates which read-only copy was selected. ffl_read and ffl_write 897 convey the latencies respectively for both read and write operations. 898 ffl_queue_depth can be used to indicate how long the I/O had to wait 899 on internal queues before being serviced. ffl_duration is used to 900 indicate the time period over which the statistics were collected. 901 ffl_local if true indicates that the I/O was serviced by the client's 902 cache. 
This flag allows the client to inform the metadata server 903 about "hot" access to a file it would not normally be allowed to 904 report on. 906 9.2.3. ff_iostats4 908 /// struct ff_iostats4 { 909 /// offset4 ffis_offset; 910 /// length4 ffis_length; 911 /// stateid4 ffis_stateid; 912 /// io_info4 ffis_read; 913 /// io_info4 ffis_write; 914 /// deviceid4 ffis_deviceid; 915 /// layoutupdate4 ffis_layoutupdate; 916 /// }; 917 /// 919 Recall that [NFSv42] defines io_info4 as: 921 struct io_info4 { 922 uint32_t ii_count; 923 uint64_t ii_bytes; 924 }; 926 With pNFS, the data transfers are performed directly between the pNFS 927 client and the storage devices. Therefore, the metadata server has 928 no visibility into the I/O stream and cannot use any statistical 929 information about client I/O to optimize data storage location. 930 ff_iostats4 MAY be used by the client to report I/O statistics back 931 to the metadata server upon returning the layout. Since it is 932 infeasible for the client to report every I/O that used the layout, 933 the client MAY identify "hot" byte ranges for which to report I/O 934 statistics. The definition and/or configuration mechanism of what is 935 considered "hot" and the size of the reported byte range is out of 936 the scope of this document. It is suggested that client 937 implementations provide reasonable default values and an optional 938 run-time management interface to control these parameters. For 939 example, a client can define the default byte range resolution to be 940 1 MB in size and the thresholds for reporting to be 1 MB/second or 10 941 I/O operations per second. For each byte range, ffis_offset and 942 ffis_length represent the starting offset of the range and the range 943 length in bytes. ffis_read.ii_count, ffis_read.ii_bytes, 944 ffis_write.ii_count, and ffis_write.ii_bytes represent, respectively, 945 the number of contiguous read and write I/Os and the respective 946 aggregate number of bytes transferred within the reported byte range. 948 The combination of ffis_deviceid and ffl_addr uniquely identifies both 949 the storage path and the network route to it. Additionally, the 950 ffis_deviceid informs the metadata server as to the version and/or 951 minor version being used for I/O to the storage device. Finally, the 952 ffl_fhandle allows the metadata server to differentiate between 953 multiple read-only copies of the file on the same storage device. 955 9.3. ff_layoutreturn4 957 /// struct ff_layoutreturn4 { 958 /// ff_ioerr4 fflr_ioerr_report<>; 959 /// ff_iostats4 fflr_iostats_report<>; 960 /// }; 961 /// 963 When data file I/O operations fail, fflr_ioerr_report<> is used to 964 report these errors to the metadata server as an array of elements of 965 type ff_ioerr4. Each element in the array represents an error that 966 occurred on the data file identified by ffie_errors.de_deviceid. If 967 no errors are to be reported, the size of the fflr_ioerr_report<> 968 array is set to zero. The client MAY also use fflr_iostats_report<> 969 to report a list of I/O statistics as an array of elements of type 970 ff_iostats4. Each element in the array represents statistics for a 971 particular byte range. Byte ranges are not guaranteed to be disjoint 972 and MAY repeat or intersect.
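As a non-normative illustration of the reporting policy sketched above, the following C fragment shows one way a client might decide whether a byte-range bucket is "hot" enough to include in fflr_iostats_report<>, using the example thresholds of 1 MB/second or 10 I/O operations per second over the collection period. The structure and function names are hypothetical and not part of the protocol.

   #include <stdbool.h>
   #include <stdint.h>
   #include <stdio.h>

   /* Hypothetical per-byte-range counters a client might keep while it
    * holds the layout; they correspond loosely to io_info4. */
   struct range_stats {
       uint64_t offset;     /* ffis_offset */
       uint64_t length;     /* ffis_length, e.g., 1 MB buckets */
       uint64_t bytes;      /* bytes transferred in the period */
       uint32_t ops;        /* contiguous I/O operations in the period */
   };

   /* Example policy from the text: report the range if it saw at least
    * 1 MB/second or 10 I/O operations per second over the period. */
   static bool range_is_hot(const struct range_stats *rs, uint32_t period_seconds)
   {
       const uint64_t bytes_per_sec_threshold = 1024 * 1024;
       const uint32_t ops_per_sec_threshold = 10;

       if (period_seconds == 0)
           return false;
       return (rs->bytes / period_seconds >= bytes_per_sec_threshold) ||
              (rs->ops / period_seconds >= ops_per_sec_threshold);
   }

   int main(void)
   {
       struct range_stats rs = { 0, 1024 * 1024, 64 * 1024 * 1024, 5 };

       /* 64 MB over a 10 second period: well above 1 MB/second, so report it. */
       printf("range [%llu, %llu) hot: %s\n",
              (unsigned long long)rs.offset,
              (unsigned long long)(rs.offset + rs.length),
              range_is_hot(&rs, 10) ? "yes" : "no");
       return 0;
   }

974 10.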
Flexible Files Layout Type LAYOUTERROR 976 If the client is using NFSv4.2 to communicate with the metadata 977 server, then instead of waiting for a LAYOUTRETURN to send error 978 information to the metadata server (see Section 9.1), it can use 979 LAYOUTERROR (see Section 15.6 of [NFSv42]) to communicate that 980 information. For the Flexible Files Layout Type, this means that 981 LAYOUTERROR4args is treated the same as ff_ioerr4. 983 11. Flexible Files Layout Type LAYOUTSTATS 985 If the client is using NFSv4.2 to communicate with the metadata 986 server, then instead of waiting for a LAYOUTRETURN to send I/O 987 statistics to the metadata server (see Section 9.2), it can use 988 LAYOUTSTATS (see Section 15.7 of [NFSv42]) to communicate that 989 information. For the Flexible Files Layout Type, this means that 990 LAYOUTSTATS4args.lsa_layoutupdate is overloaded with the same 991 contents as in ffis_layoutupdate. 993 12. Flexible File Layout Type Creation Hint 995 The layouthint4 type is defined in the [RFC5661] as follows: 997 struct layouthint4 { 998 layouttype4 loh_type; 999 opaque loh_body<>; 1000 }; 1002 The layouthint4 structure is used by the client to pass a hint about 1003 the type of layout it would like created for a particular file. If 1004 the loh_type layout type is LAYOUT4_FLEX_FILES, then the loh_body 1005 opaque value is defined by the ff_layouthint4 type. 1007 12.1. ff_layouthint4 1009 /// union ff_mirrors_hint switch (bool ffmc_valid) { 1010 /// case TRUE: 1011 /// uint32_t ffmc_mirrors; 1012 /// case FALSE: 1013 /// void; 1014 /// }; 1015 /// 1017 /// struct ff_layouthint4 { 1018 /// ff_mirrors_hint fflh_mirrors_hint; 1019 /// }; 1020 /// 1022 This type conveys hints for the desired data map. All parameters are 1023 optional so the client can give values for only the parameter it 1024 cares about. 1026 13. Recalling Layouts 1028 The Flexible File Layout Type metadata server should recall 1029 outstanding layouts in the following cases: 1031 o When the file's security policy changes, i.e., Access Control 1032 Lists (ACLs) or permission mode bits are set. 1034 o When the file's layout changes, rendering outstanding layouts 1035 invalid. 1037 o When there are sharing conflicts. 1039 13.1. CB_RECALL_ANY 1041 The metadata server can use the CB_RECALL_ANY callback operation to 1042 notify the client to return some or all of its layouts. The 1043 [RFC5661] defines the following types: 1045 const RCA4_TYPE_MASK_FF_LAYOUT_MIN = -2; 1046 const RCA4_TYPE_MASK_FF_LAYOUT_MAX = -1; 1047 [[RFC Editor: please insert assigned constants]] 1049 struct CB_RECALL_ANY4args { 1050 uint32_t craa_layouts_to_keep; 1051 bitmap4 craa_type_mask; 1052 }; 1054 [[AI13: No, 5661 does not define these above values. The ask here is 1055 to create these and _add_ them to 5661. --TH]] 1057 Typically, CB_RECALL_ANY will be used to recall client state when the 1058 server needs to reclaim resources. The craa_type_mask bitmap 1059 specifies the type of resources that are recalled and the 1060 craa_layouts_to_keep value specifies how many of the recalled 1061 Flexible File Layouts the client is allowed to keep. The Flexible 1062 File Layout Type mask flags are defined as follows: 1064 /// enum ff_cb_recall_any_mask { 1065 /// FF_RCA4_TYPE_MASK_READ = -2, 1066 /// FF_RCA4_TYPE_MASK_RW = -1 1067 [[RFC Editor: please insert assigned constants]] 1068 /// }; 1069 /// 1071 They represent the iomode of the recalled layouts. 
In response, the 1072 client SHOULD return layouts of the recalled iomode that it needs the 1073 least, keeping at most craa_layouts_to_keep Flexible File Layouts. 1075 The FF_RCA4_TYPE_MASK_READ flag notifies the client to return 1076 layouts of iomode LAYOUTIOMODE4_READ. Similarly, the 1077 FF_RCA4_TYPE_MASK_RW flag notifies the client to return layouts 1078 of iomode LAYOUTIOMODE4_RW. When both mask flags are set, the client 1079 is notified to return layouts of either iomode. 1081 14. Client Fencing 1083 In cases where clients are uncommunicative and their lease has 1084 expired or when clients fail to return recalled layouts within a 1085 lease period, at the least, the server MAY revoke client layouts and/ 1086 or device address mappings and reassign these resources to other 1087 clients (see "Recalling a Layout" in [RFC5661]). To avoid data 1088 corruption, the metadata server MUST fence off the revoked clients 1089 from the respective data files as described in Section 2.2. 1091 15. Security Considerations 1093 The pNFS extension partitions the NFSv4 file system protocol into two 1094 parts, the control path and the data path (storage protocol). The 1095 control path contains all the new operations described by this 1096 extension; all existing NFSv4 security mechanisms and features apply 1097 to the control path. The combination of components in a pNFS system 1098 is required to preserve the security properties of NFSv4 with respect 1099 to an entity accessing data via a client, including security 1100 countermeasures to defend against threats that NFSv4 provides 1101 defenses for in environments where these threats are considered 1102 significant. 1104 The metadata server enforces the file access-control policy at 1105 LAYOUTGET time. The client should use suitable authorization 1106 credentials for getting the layout for the requested iomode (READ or 1107 RW) and the server verifies the permissions and ACL for these 1108 credentials, possibly returning NFS4ERR_ACCESS if the client is not 1109 allowed the requested iomode. If the LAYOUTGET operation succeeds, 1110 the client receives, as part of the layout, a set of credentials 1111 allowing it I/O access to the specified data files corresponding to 1112 the requested iomode. When the client acts on I/O operations on 1113 behalf of its local users, it MUST authenticate and authorize the 1114 user by issuing respective OPEN and ACCESS calls to the metadata 1115 server, similar to having NFSv4 data delegations. If access is 1116 allowed, the client uses the corresponding (READ or RW) credentials 1117 to perform the I/O operations at the data files' storage devices. 1118 When the metadata server receives a request to change a file's 1119 permissions or ACL, it SHOULD recall all layouts for that file and it 1120 MUST fence off the clients holding outstanding layouts for the 1121 respective file by implicitly invalidating the outstanding 1122 credentials on all data files comprising the file before committing to the new 1123 permissions and ACL. Doing this will ensure that clients re- 1124 authorize their layouts according to the modified permissions and ACL 1125 by requesting new layouts. Recalling the layouts in this case is a 1126 courtesy of the server, intended to prevent clients from getting an 1127 error on I/Os done after the client was fenced off. 1129 15.1. Kerberized File Access 1131 15.1.1.
Loosely Coupled 1133 Under this coupling model, the principal used to authenticate the 1134 metadata file is different than that used to authenticate the data 1135 file. I.e., the synthetic principals generated to control access to 1136 the data file could prove to be difficult to manage. 1138 While RPCSEC_GSS version 3 (RPCSEC_GSSv3) [rpcsec_gssv3] could be 1139 used to authorize the client to the storage device on behalf of the 1140 metadata server, such a requirement exceeds the loose coupling model. 1141 I.e., each of the metadata server, storage device, and client would 1142 have to implement RPCSEC_GSSv3. 1144 In all, while either an elaborate schema could be used to 1145 automatically authenticate principals or RPCSEC_GSSv3 aware clients, 1146 metadata server, and storage devices could be deployed, if more 1147 secure authentication is desired, tight coupling should be considered 1148 as described in the next section. 1150 15.1.2. Tightly Coupled 1152 With tight coupling, the principal used to access the metadata file 1153 is exactly the same as used to access the data file. Thus there are 1154 no security issues related to kerberization of a tightly coupled 1155 system. 1157 16. IANA Considerations 1159 As described in [RFC5661], new layout type numbers have been assigned 1160 by IANA. This document defines the protocol associated with the 1161 existing layout type number, LAYOUT4_FLEX_FILES. 1163 17. References 1165 17.1. Normative References 1167 [LEGAL] IETF Trust, "Legal Provisions Relating to IETF Documents", 1168 November 2008, . 1171 [NFSv42] Haynes, T., "NFS Version 4 Minor Version 2", draft-ietf- 1172 nfsv4-minorversion2-28 (Work In Progress), November 2014. 1174 [RFC1813] IETF, "NFS Version 3 Protocol Specification", RFC 1813, 1175 June 1995. 1177 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1178 Requirement Levels", BCP 14, RFC 2119, March 1997. 1180 [RFC3530] Shepler, S., Callaghan, B., Robinson, D., Thurlow, R., 1181 Beame, C., Eisler, M., and D. Noveck, "Network File System 1182 (NFS) version 4 Protocol", RFC 3530, April 2003. 1184 [RFC4506] Eisler, M., "XDR: External Data Representation Standard", 1185 STD 67, RFC 4506, May 2006. 1187 [RFC5661] Shepler, S., Ed., Eisler, M., Ed., and D. Noveck, Ed., 1188 "Network File System (NFS) Version 4 Minor Version 1 1189 Protocol", RFC 5661, January 2010. 1191 [RFC5662] Shepler, S., Ed., Eisler, M., Ed., and D. Noveck, Ed., 1192 "Network File System (NFS) Version 4 Minor Version 1 1193 External Data Representation Standard (XDR) Description", 1194 RFC 5662, January 2010. 1196 [RFC5664] Halevy, B., Ed., Welch, B., Ed., and J. Zelenka, Ed., 1197 "Object-Based Parallel NFS (pNFS) Operations", RFC 5664, 1198 January 2010. 1200 [pNFSLayouts] 1201 Haynes, T., "Considerations for a New pNFS Layout Type", 1202 draft-ietf-nfsv4-layout-types-02 (Work In Progress), 1203 October 2014. 1205 17.2. Informative References 1207 [ANSI400-2004] 1208 Weber, R., Ed., "ANSI INCITS 400-2004, Information 1209 Technology - SCSI Object-Based Storage Device Commands 1210 (OSD)", December 2004. 1212 [rpcsec_gssv3] 1213 Adamson, W. and N. Williams, "Remote Procedure Call (RPC) 1214 Security Version 3", November 2014. 1216 Appendix A. Acknowledgments 1218 Those who provided miscellaneous comments to early drafts of this 1219 document include: Matt W. Benjamin, Adam Emerson, Tom Haynes, J. 1220 Bruce Fields, and Lev Solomonov. 1222 Appendix B. 
RFC Editor Notes 1224 [RFC Editor: please remove this section prior to publishing this 1225 document as an RFC] 1227 [RFC Editor: prior to publishing this document as an RFC, please 1228 replace all occurrences of RFCTBD10 with RFCxxxx where xxxx is the 1229 RFC number of this document] 1231 Authors' Addresses 1233 Benny Halevy 1234 Primary Data, Inc. 1236 Email: bhalevy@primarydata.com 1237 URI: http://www.primarydata.com 1239 Thomas Haynes 1240 Primary Data, Inc. 1241 4300 El Camino Real Ste 100 1242 Los Altos, CA 94022 1243 USA 1245 Phone: +1 408 215 1519 1246 Email: thomas.haynes@primarydata.com