NFSv4                                                          B. Halevy
Internet-Draft                                              Primary Data
Intended status: Standards Track                        October 20, 2013
Expires: April 23, 2014

               Parallel NFS (pNFS) Flexible Files Layout
                   draft-bhalevy-nfsv4-flex-files-01

Abstract

   Parallel NFS (pNFS) extends Network File System version 4 (NFSv4)
   to allow clients to directly access file data on the storage used
   by the NFSv4 server.  This ability to bypass the server for data
   access can increase both performance and parallelism, but requires
   additional client functionality for data access, some of which is
   dependent on the class of storage used, a.k.a. the Layout Type.
   The main pNFS operations and data types in NFSv4 Minor version 1
   specify a layout-type-independent layer; layout-type-specific
   information is conveyed using opaque data structures whose internal
   structure is further defined by the particular layout type
   specification.  This document specifies the NFSv4.1 Flexible Files
   pNFS Layout as a companion to the main NFSv4 Minor version 1
   specification for use of pNFS with Data Servers over NFSv4 or
   higher minor versions, using a flexible, per-file striping
   topology.
Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   This Internet-Draft will expire on April 23, 2014.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Requirements Language
   2.  Method of Operation
     2.1.  Security models
     2.2.  State and Locking Models
   3.  XDR Description of the Flexible Files Layout Protocol
     3.1.  Code Components Licensing Notice
   4.  Device Addressing and Discovery
     4.1.  pnfs_ff_device_addr
     4.2.  Data Server Multipathing
   5.  Flexible Files Layout
     5.1.  pnfs_ff_layout
     5.2.  Striping Topologies
       5.2.1.  PFSP_SPARSE_STRIPING
       5.2.2.  PFSP_DENSE_STRIPING
       5.2.3.  PFSP_RAID_4
       5.2.4.  PFSP_RAID_5
       5.2.5.  PFSP_RAID_PQ
       5.2.6.  RAID Usage and Implementation Notes
     5.3.  Mirroring
   6.  Recovering from Client I/O Errors
   7.  Flexible Files Layout Return
     7.1.  pflr_errno
     7.2.  pnfs_ff_ioerr
     7.3.  pnfs_ff_iostats
     7.4.  pnfs_ff_layoutreturn
   8.  Flexible Files Creation Layout Hint
     8.1.  pnfs_ff_layouthint
   9.  Recalling Layouts
     9.1.  CB_RECALL_ANY
   10. Client Fencing
   11. Security Considerations
   12. Striping Topologies Extensibility
   13. IANA Considerations
   14. Normative References
   Appendix A.  Acknowledgments
   Author's Address

1.  Introduction

   In pNFS, the file server returns typed layout structures that
   describe where file data is located.  There are different layouts
   for different storage systems and methods of arranging data on
   storage devices.  This document defines the layout used with
   file-based data servers that are accessed using the Network File
   System (NFS) Protocol: NFSv3 (RFC1813 [1]), NFSv4 (RFC3530 [2]),
   and its newer minor version, NFSv4.1 (RFC5661 [3]).

   In contrast to the LAYOUT4_NFSV4_1_FILES layout type (RFC5661
   [3]), which also uses NFSv4.1 to access the data server, the
   Flexible Files layout defines a model of device metadata and
   striping patterns that is inspired by the object layout (RFC5664
   [4]).  It provides flexible, per-file striping patterns and simple
   device information suitable for aggregating standalone NFS servers
   into a centrally managed pNFS cluster.

   To provide a global state model equivalent to that of the files
   layout, a back-end control protocol may be implemented between the
   metadata server (MDS) and NFSv4.1 data servers (DSs).  Specifying
   the wire protocol of such a control protocol is out of scope for
   this document; however, the requirements such a protocol must
   satisfy are specified in RFC5661 [3].  The definition of a
   standard back-end control protocol conforming to these
   requirements is encouraged to be specified within the IETF as a
   separate RFC.

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
   NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL"
   in this document are to be interpreted as described in RFC 2119
   [5].

2.  Method of Operation

   This section describes the semantics and format of flexible
   file-based layouts for pNFS.  Flexible file-based layouts use the
   LAYOUT4_FLEX_FILES layout type, which defines striping of data
   across multiple NFS Data Servers.

   For the purpose of this discussion, we distinguish between user
   files served by the metadata server, referred to as User Files,
   and user files served by Data Servers, referred to as Component
   Objects.

   Component Objects are addressable by their NFS filehandle.  Each
   Component Object may store a whole User File or parts of it, in
   case the User File is striped across multiple Component Objects.
   The striping pattern is provided by pfl_striping_pattern as
   defined below.

   Data Servers may be accessed using different versions of the NFS
   protocol.  However, the server MUST use Data Servers of the same
   NFS version and minor version for striping data within each
   layout.  The NFS version and minor version define the respective
   security, state, and locking models to be used, as described
   below.
2.1.  Security models

   With NFSv3 Data Servers, the Metadata Server uses synthetic uids
   and gids for the Component Objects, where the uid owner of the
   Component Objects is allowed read/write access and the gid owner
   is allowed read-only access.  As part of the layout, the client is
   provided with the rpc credentials to be used (see pfcf_auth,
   Section 5.1) to access the Object.  Fencing off clients is
   achieved by the server using SETATTR to change the uid and/or gid
   owners of the Component Objects, implicitly revoking the
   outstanding rpc credentials.  Note: it is recommended to implement
   common access control methods at the Data Server filesystem
   exports level to allow only the Metadata Server root (super user)
   access to the Data Server, and to set the owner of all directories
   holding Component Objects to the root user.  This security method,
   when using weak auth flavors such as AUTH_SYS, provides a
   practical model to enforce access control and fence off
   cooperative clients, but it cannot protect against malicious
   clients; hence it provides a level of security equivalent to
   NFSv3.

   With NFSv4.x Data Servers, the Metadata Server sets the user and
   group owners, mode bits, and ACL of the Component Objects to be
   the same as the User File's.  The client must authenticate with
   the Data Server and go through the same authorization process it
   would go through via the Metadata Server.

2.2.  State and Locking Models

   User File OPEN, LOCK, and DELEGATION operations are always
   executed only against the Metadata Server.

   With NFSv4 Data Servers, the Metadata Server, in response to a
   state-changing operation, executes it against the respective
   Component Objects on the Data Server(s).  It then sends the Data
   Server open stateid as part of the layout (see pfcf_stateid,
   Section 5.1), which is then used by the client for executing
   READ/WRITE operations against the Data Server.

   Standalone NFSv4.1 Data Servers that do not return the
   EXCHGID4_FLAG_USE_PNFS_DS flag to EXCHANGE_ID are used the same
   way as NFSv4 Data Servers.

   NFSv4.1 Clustered Data Servers that do identify themselves with
   the EXCHGID4_FLAG_USE_PNFS_DS flag to EXCHANGE_ID use a back-end
   control protocol as described in RFC5661 [3] to implement a global
   stateid model as defined there.

3.  XDR Description of the Flexible Files Layout Protocol

   This document contains the external data representation (XDR [6])
   description of the NFSv4.1 flexible files layout protocol.  The
   XDR description is embedded in this document in a way that makes
   it simple for the reader to extract into a ready-to-compile form.
   The reader can feed this document into the following shell script
   to produce the machine-readable XDR description of the NFSv4.1
   flexible files layout protocol:

   #!/bin/sh
   grep '^ *///' $* | sed 's?^ */// ??' | sed 's?^ *///$??'

   That is, if the above script is stored in a file called
   "extract.sh", and this document is in a file called "spec.txt",
   then the reader can do:

   sh extract.sh < spec.txt > pnfs_flex_files_prot.x

   The effect of the script is to remove leading white space from
   each line, plus a sentinel sequence of "///".

   The embedded XDR file header follows.  Subsequent XDR
   descriptions, with the sentinel sequence, are embedded throughout
   the document.

   Note that the XDR code contained in this document depends on types
   from the NFSv4.1 nfs4_prot.x file ([7]).  This includes both nfs
   types that end with a 4, such as offset4 and length4, as well as
   more generic types such as uint32_t and uint64_t.
3.1.  Code Components Licensing Notice

   The XDR description, marked with lines beginning with the sequence
   "///", as well as scripts for extracting the XDR description, are
   Code Components as described in Section 4 of "Legal Provisions
   Relating to IETF Documents" [8].  These Code Components are
   licensed according to the terms of Section 4 of "Legal Provisions
   Relating to IETF Documents".

   /// /*
   ///  * Copyright (c) 2012 IETF Trust and the persons identified
   ///  * as authors of the code.  All rights reserved.
   ///  *
   ///  * Redistribution and use in source and binary forms, with
   ///  * or without modification, are permitted provided that the
   ///  * following conditions are met:
   ///  *
   ///  * o Redistributions of source code must retain the above
   ///  *   copyright notice, this list of conditions and the
   ///  *   following disclaimer.
   ///  *
   ///  * o Redistributions in binary form must reproduce the above
   ///  *   copyright notice, this list of conditions and the
   ///  *   following disclaimer in the documentation and/or other
   ///  *   materials provided with the distribution.
   ///  *
   ///  * o Neither the name of Internet Society, IETF or IETF
   ///  *   Trust, nor the names of specific contributors, may be
   ///  *   used to endorse or promote products derived from this
   ///  *   software without specific prior written permission.
   ///  *
   ///  * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS
   ///  * AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED
   ///  * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
   ///  * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
   ///  * FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO
   ///  * EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
   ///  * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
   ///  * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
   ///  * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
   ///  * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
   ///  * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
   ///  * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
   ///  * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
   ///  * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
   ///  * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
   ///  *
   ///  * This code was derived from draft-bhalevy-nfsv4-flex-files-01.
   [[RFC Editor: please insert RFC number if needed]]
   ///  * Please reproduce this note if possible.
   ///  */
   ///
   /// /*
   ///  * pnfs_flex_files_prot.x
   ///  */
   ///
   /// /*
   ///  * The following include statements are for example only.
   ///  * The actual XDR definition files are generated separately
   ///  * and independently and are likely to have a different name.
   ///  */
   /// %#include
   /// %#include
   ///

4.  Device Addressing and Discovery

   Data operations to a data server require the client to know the
   network address of the data server.  The GETDEVICEINFO NFSv4.1
   operation is used by the client to retrieve that information.

4.1.  pnfs_ff_device_addr

   The pnfs_ff_device_addr data structure is returned by the server
   as the storage-protocol-specific opaque field da_addr_body in the
   device_addr4 structure by a successful GETDEVICEINFO operation
   [3].
   /// struct pnfs_ff_device_addr {
   ///     multipath_list4     pfda_netaddrs;
   ///     uint32_t            pfda_version;
   ///     uint32_t            pfda_minorversion;
   ///     pathname4           pfda_path;
   /// };
   ///

   The pfda_netaddrs field is used to locate the data server.  It
   MUST be set by the server to a list holding one or more of the
   device network addresses.

   pfda_version and pfda_minorversion represent the NFS protocol to
   be used to access the data server.  This layout specification
   defines the semantics for pfda_versions 3 and 4.  If pfda_version
   equals 3, the server MUST set pfda_minorversion to 0 and the
   client MUST access the data server using the NFSv3 protocol
   (RFC1813 [1]).  If pfda_version equals 4, the server MUST set
   pfda_minorversion to either 0 or 1 and the client MUST access the
   data server using NFSv4 (RFC3530 [2]) or NFSv4.1 (RFC5661 [3]),
   respectively.

   pfda_path MAY be set by the server to an exported path on the data
   server for device identification.  If provided, the path MUST
   exist and be accessible to the client.  If the path does not
   exist, the client MUST ignore this device information and any
   layouts referring to the respective deviceid until valid device
   information is acquired.
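   The version rules above are mechanical and easy to check on the
   client side.  The following C sketch illustrates them; the struct
   and function names are illustrative only and not part of the
   protocol:

   #include <stdbool.h>
   #include <stdint.h>

   /* Mirrors the version fields of pnfs_ff_device_addr. */
   struct ff_device_addr {
           uint32_t version;       /* pfda_version */
           uint32_t minorversion;  /* pfda_minorversion */
   };

   /* Return true iff the advertised NFS protocol version is one
    * this layout specification defines semantics for. */
   static bool ff_device_addr_valid(const struct ff_device_addr *da)
   {
           if (da->version == 3)
                   return da->minorversion == 0;  /* NFSv3 */
           if (da->version == 4)
                   return da->minorversion <= 1;  /* NFSv4.0 or NFSv4.1 */
           return false;
   }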
4.2.  Data Server Multipathing

   The flexible files layout supports multipathing to multiple data
   server addresses.  Data-server-level multipathing is used for
   bandwidth scaling via trunking and for higher availability in the
   case of a data-server failure.  Multipathing allows the client to
   switch to another data server address, which may be that of
   another data server that is exporting the same data stripe unit,
   without having to contact the metadata server for a new layout.

   To support data server multipathing, pfda_netaddrs contains an
   array of one or more data server network addresses.  This array
   (data type multipath_list4) represents a list of data servers
   (each identified by a network address), with the possibility that
   some data servers will appear in the list multiple times.

   The client is free to use any of the network addresses as a
   destination to send data server requests.  If some network
   addresses are less optimal paths to the data than others, then the
   MDS SHOULD NOT include those network addresses in pfda_netaddrs.
   If less optimal network addresses exist to provide failover, the
   RECOMMENDED method to offer the addresses is to provide them in a
   replacement device-ID-to-device-address mapping, or a replacement
   device ID.  When a client finds no response from the data server
   using all addresses available in pfda_netaddrs, it SHOULD send a
   GETDEVICEINFO to attempt to replace the existing
   device-ID-to-device-address mappings.  If the MDS detects that all
   network paths represented by pfda_netaddrs are unavailable, the
   MDS SHOULD send a CB_NOTIFY_DEVICEID (if the client has indicated
   it wants device ID notifications for changed device IDs) to change
   the device-ID-to-device-address mappings to the available
   addresses.  If the device ID itself will be replaced, the MDS
   SHOULD recall all layouts with the device ID, and thus force the
   client to get new layouts and device ID mappings via LAYOUTGET and
   GETDEVICEINFO.

   Generally, if two network addresses appear in pfda_netaddrs, they
   will designate the same data server.  When the data server is
   accessed over NFSv4.1 or a higher minor version, the two data
   server addresses will support the implementation of client ID or
   session trunking (the latter is RECOMMENDED) as defined in RFC5661
   [3].  The two data server addresses will share the same server
   owner or major ID of the server owner.  It is not always necessary
   for the two data server addresses to designate the same server
   when trunking is used; for example, the data could be read-only
   and consist of exact replicas.

5.  Flexible Files Layout

   The layout4 type is defined in RFC5661 [3] as follows:

   /// enum layouttype4 {
   ///     LAYOUT4_NFSV4_1_FILES   = 1,
   ///     LAYOUT4_OSD2_OBJECTS    = 2,
   ///     LAYOUT4_BLOCK_VOLUME    = 3,
   ///     LAYOUT4_FLEX_FILES      = 4
   [[RFC Editor: please insert layouttype assigned by IANA]]
   /// };
   ///
   /// struct layout_content4 {
   ///     layouttype4         loc_type;
   ///     opaque              loc_body<>;
   /// };
   ///
   /// struct layout4 {
   ///     offset4             lo_offset;
   ///     length4             lo_length;
   ///     layoutiomode4       lo_iomode;
   ///     layout_content4     lo_content;
   /// };

   This document defines the structure associated with the
   layouttype4 value LAYOUT4_FLEX_FILES.  NFSv4.1 RFC5661 [3]
   specifies the loc_body structure as an XDR type "opaque".  The
   opaque layout is uninterpreted by the generic pNFS client layers,
   but obviously must be interpreted by the flexible files layout
   driver.  This section defines the structure of this opaque value,
   pnfs_ff_layout4.

5.1.  pnfs_ff_layout

   /// enum pnfs_ff_striping_pattern {
   ///     PFSP_SPARSE_STRIPING    = 1,
   ///     PFSP_DENSE_STRIPING     = 2,
   ///     PFSP_RAID_4             = 4,
   ///     PFSP_RAID_5             = 5,
   ///     PFSP_RAID_PQ            = 6
   /// };
   ///
   /// enum pnfs_ff_comp_type {
   ///     PNFS_FF_COMP_MISSING    = 0,
   ///     PNFS_FF_COMP_PACKED     = 1,
   ///     PNFS_FF_COMP_FULL       = 2
   /// };
   ///
   /// struct pnfs_ff_comp_full {
   ///     deviceid4               pfcf_deviceid;
   ///     nfs_fh4                 pfcf_fhandle;
   ///     stateid4                pfcf_stateid;
   ///     opaque_auth             pfcf_auth;
   ///     uint32_t                pfcf_metric;
   /// };
   ///
   /// union pnfs_ff_comp switch (pnfs_ff_comp_type pfc_type) {
   ///     case PNFS_FF_COMP_MISSING:
   ///         void;
   ///
   ///     case PNFS_FF_COMP_PACKED:
   ///         deviceid4           pfcp_deviceid;
   ///
   ///     case PNFS_FF_COMP_FULL:
   ///         pnfs_ff_comp_full   pfcp_full;
   /// };
   ///
   /// struct pnfs_ff_layout {
   ///     pnfs_ff_striping_pattern    pfl_striping_pattern;
   ///     uint32_t                    pfl_num_comps;
   ///     uint32_t                    pfl_mirror_cnt;
   ///     length4                     pfl_stripe_unit;
   ///     nfs_fh4                     pfl_global_fh;
   ///     uint32_t                    pfl_comps_index;
   ///     pnfs_ff_comp                pfl_comps<>;
   /// };
   ///

   The pnfs_ff_layout structure specifies a layout over a set of
   Component Objects.  The layout parameterizes the algorithm that
   maps the file's contents within the returned byte range, as
   represented by lo_offset and lo_length, over the Component
   Objects.

   It is possible that the file is concatenated from more than one
   layout segment.  Each layout segment MAY carry different striping
   parameters, which apply only to that segment's byte range.

   This section provides a brief introduction to the layout
   parameters.  See Section 5.2 for a more detailed description of
   the different striping schemes and the respective interpretation
   of the layout parameters for each striping scheme.
   In addition to mapping data using simple striping schemes, where
   loss of a single component object results in data loss, the layout
   parameters support mirroring and more advanced redundancy schemes
   that protect against loss of component objects.
   pfl_striping_pattern represents the algorithm to be used for
   mapping byte offsets in the file address space to corresponding
   component objects in the returned layout and byte offsets in the
   component's address space.  pfl_striping_pattern also represents
   methods for storing and retrieving redundant data that can be used
   to recover from failure or loss of component objects.

   pfl_num_comps is the total number of component objects the file is
   striped over within the returned byte range, not counting mirrored
   components (see pfl_mirror_cnt below).  Note that the server MAY
   grow the file by adding more components to the stripe while
   clients hold valid layouts, until the file has reached its final
   stripe width.

   pfl_mirror_cnt represents the number of mirrors each component in
   the stripe has.  If there is no mirroring, then pfl_mirror_cnt
   MUST be 0.  Otherwise, the number of entries listed in pfl_comps
   MUST be a multiple of (pfl_mirror_cnt+1).

   pfl_stripe_unit is the number of bytes placed on one component
   before advancing to the next one in the list of components.  When
   the file is striped over a single component object (pfl_num_comps
   equals 1), the stripe unit has no use and the server SHOULD set it
   to the server default value or to zero; otherwise,
   pfl_stripe_unit MUST NOT be set to zero.

   The pfl_comps field represents an array of component objects.  The
   data placement algorithm that maps file data onto component
   objects assumes that each component object occurs exactly once in
   the array of components.  Therefore, component objects MUST appear
   in the pfl_comps array only once.  The components array may
   represent all objects comprising the file, in which case
   pfl_comps_index is set to zero and the number of entries in the
   pfl_comps array is equal to pfl_num_comps * (pfl_mirror_cnt + 1).
   The server MAY return fewer components than pfl_num_comps,
   provided that the returned byte range represented by lo_offset and
   lo_length maps in whole into the set of returned component
   objects.  In this case, pfl_comps_index represents the logical
   position of the returned components array, pfl_comps, within the
   full array of components that comprise the file.  pfl_comps_index
   MUST be a multiple of (pfl_mirror_cnt + 1).

   Each component object in the pfl_comps array is described by the
   pnfs_ff_comp type.

   When a component object is unavailable, pfc_type is set to
   PNFS_FF_COMP_MISSING and no other information for this component
   is returned.  When a data redundancy scheme is being used, as
   represented by pfl_striping_pattern, the client MAY use a
   respective data recovery algorithm to reconstruct data that is
   logically stored on the missing component, using user data and
   redundant data stored on the available components in the
   containing stripe.

   The server MUST set the same pfc_type for all available
   components: either PNFS_FF_COMP_PACKED or PNFS_FF_COMP_FULL.

   When NFSv4.1 Clustered Data Servers are used, the metadata server
   implements the global state model where all data servers share the
   same stateid and filehandle for the file.
   In such a case, the client MUST use the open, delegation, or lock
   stateid returned by the metadata server for the file when
   accessing the Data Servers for READ and WRITE; the global
   filehandle to be used by the client is provided by pfl_global_fh.
   If the metadata server filehandle for the file is being used by
   all data servers, then pfl_global_fh MAY be set to an empty
   filehandle.

   pfcp_deviceid or pfcf_deviceid provide the deviceid of the data
   server holding the Component Object.

   When standalone data servers are used, either over NFSv4 or
   NFSv4.1, pfl_global_fh SHOULD be set to an empty filehandle and
   MUST be ignored by the client; pfcf_fhandle provides the
   filehandle of the Data Server file holding the Component Object,
   and pfcf_stateid provides the stateid to be used by the client to
   access the file.

   For NFSv3 Data Servers, pfcf_auth provides the rpc credentials to
   be used by the client to access the Component Objects.  For
   NFSv4.x Data Servers, the server SHOULD use the AUTH_NONE flavor
   and a zero-length opaque body to minimize the returned structure
   length; the client MUST ignore pfcf_auth in this case.

   When pfl_mirror_cnt is not zero, pfcf_metric indicates the
   distance of the respective component object from the client;
   otherwise, the server MUST set pfcf_metric to zero.  When reading
   data, the client is advised to read from components with the
   lowest pfcf_metric.  When there are several components with the
   same pfcf_metric, client implementations may implement a load
   distribution algorithm to evenly distribute the read load across
   several devices and thereby provide greater bandwidth.

5.2.  Striping Topologies

   This section describes the different data mapping schemes in
   detail.

   pnfs_ff_striping_pattern determines the algorithm and placement of
   redundant data.  This section defines the different redundancy
   algorithms.  Note: the term "RAID" (Redundant Array of Independent
   Disks) is used in this document to represent an array of Component
   Objects that store data for an individual User File.  The objects
   are stored on independent Data Servers.  User File data is encoded
   and striped across the array of Component Objects using algorithms
   developed for block-based RAID systems.

5.2.1.  PFSP_SPARSE_STRIPING

   The mapping from the logical offset within a file (L) to the
   Component Object C and object-specific offset O is direct and
   straightforward, as defined by the following equations:

   L: logical offset into the file

   W: stripe width
      W = pfl_num_comps

   S: number of bytes in a stripe
      S = W * pfl_stripe_unit

   N: stripe number
      N = L / S

   C: component index corresponding to L
      C = (L % S) / pfl_stripe_unit

   O: the component offset corresponding to L
      O = L

   Note that this computation does not accommodate the same object
   appearing in the pfl_comps array multiple times.  Therefore, the
   server must not return layouts with the same object appearing
   multiple times.  If needed, the server can return multiple layout
   segments, each covering a single instance of the object.

   PFSP_SPARSE_STRIPING means there is no parity data, so all bytes
   in the component objects are data bytes located by the above
   equations for C and O.
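   The following C sketch is a worked illustration of the sparse
   mapping; the names are illustrative only and not part of the
   protocol:

   #include <stdint.h>

   /* Map logical file offset L to (component index C, component
    * offset O) under PFSP_SPARSE_STRIPING, per the equations above.
    * Assumes num_comps >= 1 and stripe_unit > 0. */
   static void ff_sparse_map(uint64_t L, uint32_t num_comps,
                             uint64_t stripe_unit,
                             uint32_t *C, uint64_t *O)
   {
           uint64_t S = (uint64_t)num_comps * stripe_unit; /* stripe size */

           *C = (uint32_t)((L % S) / stripe_unit);
           *O = L;  /* sparse: component offset equals file offset */
   }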
   If a component object is marked as PNFS_FF_COMP_MISSING, the pNFS
   client MUST either return an I/O error when a read of this
   component is attempted or, alternatively, retry the READ against
   the pNFS server.

5.2.2.  PFSP_DENSE_STRIPING

   The mapping from the logical offset within a file (L) to the
   component object C and object-specific offset O is defined by the
   following equations:

   L: logical offset into the file

   W: stripe width
      W = pfl_num_comps

   S: number of bytes in a stripe
      S = W * pfl_stripe_unit

   N: stripe number
      N = L / S

   C: component index corresponding to L
      C = (L % S) / pfl_stripe_unit

   O: the component offset corresponding to L
      O = (N * pfl_stripe_unit) + (L % pfl_stripe_unit)

   Note that this computation does not accommodate the same object
   appearing in the pfl_comps array multiple times.  Therefore, the
   server must not return layouts with the same object appearing
   multiple times.  If needed, the server can return multiple layout
   segments, each covering a single instance of the object.

   PFSP_DENSE_STRIPING means there is no parity data, so all bytes in
   the component objects are data bytes located by the above
   equations for C and O.  If a component object is marked as
   PNFS_FF_COMP_MISSING, the pNFS client MUST either return an I/O
   error when a read of this component is attempted or,
   alternatively, retry the READ against the pNFS server.

   Note that the layout depends on the file size, which the client
   learns from the generic return parameters of LAYOUTGET, by doing
   GETATTR commands to the Metadata Server.  The client uses the file
   size to decide if it should fill holes with zeros or return a
   short read.  Striping patterns can cause cases where Component
   Objects are shorter than other components because a hole happens
   to correspond to the last part of the Component Object.
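   The dense mapping differs from the sparse mapping only in the
   component offset computation.  A corresponding C sketch
   (illustrative only):

   #include <stdint.h>

   /* Map logical file offset L to (C, O) under PFSP_DENSE_STRIPING.
    * Only O differs from the sparse case: offsets within each
    * component are packed densely, per the equations above.
    * Assumes num_comps >= 1 and stripe_unit > 0. */
   static void ff_dense_map(uint64_t L, uint32_t num_comps,
                            uint64_t stripe_unit,
                            uint32_t *C, uint64_t *O)
   {
           uint64_t S = (uint64_t)num_comps * stripe_unit; /* stripe size */
           uint64_t N = L / S;                             /* stripe number */

           *C = (uint32_t)((L % S) / stripe_unit);
           *O = N * stripe_unit + (L % stripe_unit);
   }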
5.2.3.  PFSP_RAID_4

   PFSP_RAID_4 means that the last component object in the stripe
   contains parity information computed over the rest of the stripe
   with an XOR operation.  If a Component Object is unavailable, the
   client can read the rest of the stripe units in the damaged stripe
   and recompute the missing stripe unit by XORing the other stripe
   units in the stripe.  Or the client can retry the READ against the
   pNFS server, which will presumably perform the reconstructed read
   on the client's behalf.

   When parity is present in the file, the number of parity devices
   is taken into account in the above equations when calculating (D),
   the number of data devices in a stripe, as follows:

   P: number of parity devices in each stripe
      P = 1

   D: number of data devices in a stripe
      D = W - P

   I: parity device index
      I = D

5.2.4.  PFSP_RAID_5

   PFSP_RAID_5 means that the position of the parity data is rotated
   on each stripe.  In the first stripe, the last component holds the
   parity.  In the second stripe, the next-to-last component holds
   the parity, and so on.  In this scheme, all stripe units are
   rotated so that I/O is evenly spread across objects as the file is
   read sequentially.  The rotated parity layout is illustrated here,
   with hexadecimal numbers indicating the stripe unit.

      0  1  2  P
      4  5  P  3
      8  P  6  7
      P  9  a  b

   Note that the math for RAID_5 is similar to RAID_4, only that the
   device indices for each stripe are rotated backwards.  So start
   with the equations above for RAID_4, then compute the rotation as
   described below.

   P: number of parity devices in each stripe
      P = 1

   PC: parity cycle
      PC = W

   R: the parity rotation index
      (N is as computed in the above equations for RAID-4)
      R = N % PC

   I: parity device index
      I = (W + W - (R + 1) * P) % W

   Cr: the rotated device index
      (C is as computed in the above equations for RAID-4)
      Cr = (W + C - (R * P)) % W

   Note: W is added above to avoid negative numbers in the modulo
   arithmetic.
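   To make the rotation concrete, the following C sketch
   (illustrative only) computes the parity device index and the
   rotated device index for PFSP_RAID_5:

   #include <stdint.h>

   /* Compute the RAID-5 parity device index I and the rotated
    * device index Cr for stripe number N and unrotated component
    * index C, per the equations above (P = 1, PC = W). */
   static void ff_raid5_rotate(uint64_t N, uint32_t C, uint32_t W,
                               uint32_t *I, uint32_t *Cr)
   {
           uint32_t P = 1;                 /* parity devices per stripe */
           uint32_t R = (uint32_t)(N % W); /* parity rotation index */

           *I  = (W + W - (R + 1) * P) % W;
           *Cr = (W + C - R * P) % W;
   }

   For example, with W = 4 this yields I = 3 for stripe 0 and I = 2
   for stripe 1, matching the illustration above.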
5.2.5.  PFSP_RAID_PQ

   PFSP_RAID_PQ is a double-parity scheme that uses the Reed-Solomon
   P+Q encoding scheme [9].  In this layout, the last two component
   objects hold the P and Q data, respectively.  P is parity computed
   with XOR.  The Q computation is described in detail by Anvin [10].
   The same polynomial "x^8+x^4+x^3+x^2+1" and Galois field size of
   2^8 are used here.  Clients may simply choose to read data through
   the metadata server if two or more components are missing or
   damaged.

   The equations given above for embedded parity can be used to map a
   file offset to the correct component object by setting the number
   of parity components (P) to 2 instead of 1 as for RAID-5, and by
   computing the Parity Cycle length as the Least Common Multiple
   [11] of pfl_num_comps and P, divided by P, as described below.
   Note: this algorithm can also be used for RAID-5, where P=1.

   P: number of parity devices
      P = 2

   PC: parity cycle
      PC = LCM(W, P) / P

   Q: the device index holding the Q component
      (I is as computed in the above equations for RAID-5)
      Q = (I + 1) % W

5.2.6.  RAID Usage and Implementation Notes

   RAID layouts with redundant data in their stripes require
   additional serialization of updates to ensure correct operation.
   Otherwise, if two clients simultaneously write to the same logical
   range of an object, the result could include different data in the
   same ranges of mirrored tuples, or corrupt parity information.  It
   is the responsibility of the metadata server to enforce
   serialization requirements such as this.  For example, the
   metadata server may do so by not granting overlapping write
   layouts within mirrored objects.

   Many alternative encoding schemes exist for P >= 2 [12].  These
   involve P or Q equations different than those used in
   PFSP_RAID_PQ.  Thus, if one of these schemes is to be used in the
   future, a distinct value must be added to pnfs_ff_striping_pattern
   for it.  While Reed-Solomon codes are well understood, recently
   discovered schemes such as Liberation codes are more
   computationally efficient for small stripe widths, and Cauchy
   Reed-Solomon codes are more computationally efficient for higher
   values of P.

5.3.  Mirroring

   The pfl_mirror_cnt is used to replicate a file by replicating its
   Component Objects.  If there is no mirroring, then pfl_mirror_cnt
   MUST be 0.  If pfl_mirror_cnt is greater than zero, then the size
   of the pfl_comps array MUST be a multiple of (pfl_mirror_cnt + 1).
   Thus, for a classic mirror on two objects, pfl_mirror_cnt is one.
   Note that mirroring can be defined over any striping pattern.

   Replicas are adjacent in the pfl_comps array, and the value C
   produced by the above equations is not a direct index into the
   pfl_comps array.  Instead, the following equations determine the
   replica component index RCi, where i ranges from 0 to
   pfl_mirror_cnt.

   FW = size of pfl_comps array / (pfl_mirror_cnt+1)

   C = component index for striping as calculated using the above
       equations

   i ranges from 0 to pfl_mirror_cnt, inclusive
   RCi = C * (pfl_mirror_cnt+1) + i
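   For example, in C (illustrative only), the pfl_comps indices of
   all replicas of a logical component C are:

   #include <stdint.h>

   /* Index into pfl_comps of replica i (0..mirror_cnt) of logical
    * component C, per the RCi equation above. */
   static uint32_t ff_replica_index(uint32_t C, uint32_t mirror_cnt,
                                    uint32_t i)
   {
           return C * (mirror_cnt + 1) + i;
   }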
6.  Recovering from Client I/O Errors

   The pNFS client may encounter errors when directly accessing the
   Data Servers.  However, it is the responsibility of the Metadata
   Server to recover from the I/O errors.  When the
   LAYOUT4_FLEX_FILES layout type is used, the client MUST report the
   I/O errors to the server at LAYOUTRETURN time using the
   pnfs_ff_ioerr structure (see Section 7.2).

   The metadata server analyzes the error and determines the required
   recovery operations, such as repairing any parity inconsistencies,
   recovering media failures, or reconstructing missing objects.

   The metadata server SHOULD recall any outstanding layouts to allow
   it exclusive write access to the stripes being recovered and to
   prevent other clients from hitting the same error condition.  In
   these cases, the server MUST complete recovery before handing out
   any new layouts to the affected byte ranges.

   Although it MAY be acceptable for the client to propagate a
   corresponding error to the application that initiated the I/O
   operation and drop any unwritten data, the client SHOULD attempt
   to retry the original I/O operation by requesting a new layout
   using LAYOUTGET and retry the I/O operation(s) using the new
   layout, or the client MAY just retry the I/O operation(s) using
   regular NFS READ or WRITE operations via the metadata server.  The
   client SHOULD attempt to retrieve a new layout and retry the I/O
   operation using the Data Server first, and only if the error
   persists, retry the I/O operation via the metadata server.

7.  Flexible Files Layout Return

   layoutreturn_file4 is used in the LAYOUTRETURN operation to convey
   layout-type specific information to the server.  It is defined in
   NFSv4.1 [3] as follows:

   struct layoutreturn_file4 {
           offset4         lrf_offset;
           length4         lrf_length;
           stateid4        lrf_stateid;
           /* layouttype4 specific data */
           opaque          lrf_body<>;
   };

   union layoutreturn4 switch(layoutreturn_type4 lr_returntype) {
           case LAYOUTRETURN4_FILE:
                   layoutreturn_file4      lr_layout;
           default:
                   void;
   };

   struct LAYOUTRETURN4args {
           /* CURRENT_FH: file */
           bool                    lora_reclaim;
           layoutreturn_stateid    lora_recallstateid;
           layouttype4             lora_layout_type;
           layoutiomode4           lora_iomode;
           layoutreturn4           lora_layoutreturn;
   };

   If the lora_layout_type layout type is LAYOUT4_FLEX_FILES, then
   the lrf_body opaque value is defined by the pnfs_ff_layoutreturn4
   type.

   The pnfs_ff_layoutreturn4 type allows the client to report I/O
   error information or layout usage statistics back to the metadata
   server as defined below.

7.1.  pflr_errno

   /// enum pflr_errno {
   ///     PNFS_FF_ERR_EIO          = 1,
   ///     PNFS_FF_ERR_NOT_FOUND    = 2,
   ///     PNFS_FF_ERR_NO_SPACE     = 3,
   ///     PNFS_FF_ERR_BAD_STATEID  = 4,
   ///     PNFS_FF_ERR_NO_ACCESS    = 5,
   ///     PNFS_FF_ERR_UNREACHABLE  = 6,
   ///     PNFS_FF_ERR_RESOURCE     = 7
   /// };
   ///

   pflr_errno4 is used to represent error types when read/write
   errors are reported to the metadata server.  The error codes serve
   as hints to the metadata server that may help it in diagnosing the
   exact reason for the error and in repairing it.

   o  PNFS_FF_ERR_EIO indicates the operation failed because the Data
      Server experienced a failure trying to access the object.  The
      most common source of these errors is media errors, but other
      internal errors might cause this as well.  In this case, the
      metadata server should examine the broken object more closely;
      hence, it should be used as the default error code.

   o  PNFS_FF_ERR_NOT_FOUND indicates the filehandle specifies a
      Component Object that does not exist on the Data Server.

   o  PNFS_FF_ERR_NO_SPACE indicates the operation failed because the
      Data Server ran out of free capacity during the operation.

   o  PNFS_FF_ERR_BAD_STATEID indicates the stateid is not valid.

   o  PNFS_FF_ERR_NO_ACCESS indicates the rpc credentials do not
      allow the requested operation.  This may happen when the client
      is fenced off.  The client will need to return the layout and
      get a new one with fresh credentials.

   o  PNFS_FF_ERR_UNREACHABLE indicates the client did not complete
      the I/O operation at the Data Server due to a communication
      failure.  Whether or not the I/O operation was executed by the
      Data Server is undetermined.

   o  PNFS_FF_ERR_RESOURCE indicates the client did not issue the I/O
      operation due to a local problem on the initiator (i.e.,
      client) side, e.g., when running out of memory.  The client
      MUST guarantee that the Data Server WRITE operation was never
      sent.

7.2.  pnfs_ff_ioerr

   /// struct pnfs_ff_ioerr {
   ///     deviceid4       ioe_deviceid;
   ///     nfs_fh4         ioe_fhandle;
   ///     offset4         ioe_comp_offset;
   ///     length4         ioe_comp_length;
   ///     bool            ioe_iswrite;
   ///     pflr_errno      ioe_errno;
   /// };
   ///

   The pnfs_ff_ioerr4 structure is used to return error indications
   for Component Objects that generated errors during data transfers.
   These are hints to the metadata server that there are problems
   with that object.  For each error, "ioe_deviceid", "ioe_fhandle",
   "ioe_comp_offset", and "ioe_comp_length" represent the Component
   Object and the byte range within the object in which the error
   occurred; "ioe_iswrite" is set to "true" if the failed Data Server
   operation was data modifying, and "ioe_errno" represents the type
   of error.

   Component byte ranges in the optional pnfs_ff_ioerr4 structure are
   used for recovering the object and MUST be set by the client to
   cover all failed I/O operations to the component.

7.3.  pnfs_ff_iostats

   /// struct pnfs_ff_iostats {
   ///     offset4         ios_offset;
   ///     length4         ios_length;
   ///     uint32_t        ios_duration;
   ///     uint32_t        ios_rd_count;
   ///     uint64_t        ios_rd_bytes;
   ///     uint32_t        ios_wr_count;
   ///     uint64_t        ios_wr_bytes;
   /// };
   ///

   With pNFS, the data transfers are performed directly between the
   pNFS client and the data servers.  Therefore, the metadata server
   has no visibility into the I/O stream and cannot use any
   statistical information about client I/O to optimize data storage
   location.  pnfs_ff_iostats4 MAY be used by the client to report
   I/O statistics back to the metadata server upon returning the
   layout.  Since it is infeasible for the client to report every I/O
   that used the layout, the client MAY identify "hot" byte ranges
   for which to report I/O statistics.  The definition and/or
   configuration mechanism of what is considered "hot" and the size
   of the reported byte range are out of the scope of this document.
   Client implementations are encouraged to provide reasonable
   default values and an optional run-time management interface to
   control these parameters.  For example, a client can define the
   default byte range resolution to be 1 MB in size and the
   thresholds for reporting to be 1 MB/second or 10 I/O operations
   per second.  For each byte range, ios_offset and ios_length
   represent the starting offset of the range and the range length in
   bytes.  ios_duration represents the number of seconds the reported
   burst of I/O lasted.  ios_rd_count, ios_rd_bytes, ios_wr_count,
   and ios_wr_bytes represent, respectively, the number of contiguous
   read and write I/Os and the respective aggregate number of bytes
   transferred within the reported byte range.

7.4.  pnfs_ff_layoutreturn

   /// struct pnfs_ff_layoutreturn {
   ///     pnfs_ff_ioerr      pflr_ioerr_report<>;
   ///     pnfs_ff_iostats    pflr_iostats_report<>;
   /// };
   ///

   When I/O operations to Component Objects fail,
   "pflr_ioerr_report<>" is used to report these errors to the
   metadata server as an array of elements of type pnfs_ff_ioerr4.
   Each element in the array represents an error that occurred on the
   Component Object identified by ioe_deviceid and ioe_fhandle.  If
   no errors are to be reported, the size of the pflr_ioerr_report<>
   array is set to zero.  The client MAY also use
   "pflr_iostats_report<>" to report a list of I/O statistics as an
   array of elements of type pnfs_ff_iostats4.  Each element in the
   array represents statistics for a particular byte range.  Byte
   ranges are not guaranteed to be disjoint and MAY repeat or
   intersect.
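   As an illustration, the following C sketch shows a client
   recording a failed WRITE in one error-report entry; the struct
   mirrors part of the XDR above, with illustrative names, and the
   deviceid, filehandle, and XDR encoding are omitted:

   #include <stdbool.h>
   #include <stdint.h>

   /* Stand-in for one pnfs_ff_ioerr entry of pflr_ioerr_report<>. */
   struct ff_ioerr {
           uint64_t comp_offset;   /* ioe_comp_offset */
           uint64_t comp_length;   /* ioe_comp_length */
           bool     iswrite;       /* ioe_iswrite */
           int      err;           /* a pflr_errno value */
   };

   /* Record a failed WRITE; the byte range MUST cover all failed
    * I/O operations to the component, as required above. */
   static void ff_record_write_error(struct ff_ioerr *e,
                                     uint64_t off, uint64_t len,
                                     int err)
   {
           e->comp_offset = off;
           e->comp_length = len;
           e->iswrite     = true;
           e->err         = err;   /* e.g., PNFS_FF_ERR_UNREACHABLE */
   }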
8.  Flexible Files Creation Layout Hint

   The layouthint4 type is defined in NFSv4.1 [3] as follows:

   struct layouthint4 {
           layouttype4     loh_type;
           opaque          loh_body<>;
   };

   The layouthint4 structure is used by the client to pass a hint
   about the type of layout it would like created for a particular
   file.  If the loh_type layout type is LAYOUT4_FLEX_FILES, then the
   loh_body opaque value is defined by the pnfs_ff_layouthint type.

8.1.  pnfs_ff_layouthint

   /// union pnfs_ff_max_comps_hint switch (bool pfmx_valid) {
   ///     case TRUE:
   ///         uint32_t    pfmx_max_comps;
   ///     case FALSE:
   ///         void;
   /// };
   ///
   /// union pnfs_ff_stripe_unit_hint switch (bool pfsu_valid) {
   ///     case TRUE:
   ///         length4     pfsu_stripe_unit;
   ///     case FALSE:
   ///         void;
   /// };
   ///
   /// union pnfs_ff_mirror_cnt_hint switch (bool pfmc_valid) {
   ///     case TRUE:
   ///         uint32_t    pfmc_mirror_cnt;
   ///     case FALSE:
   ///         void;
   /// };
   ///
   /// union pnfs_ff_striping_pattern_hint switch (bool pfsp_valid) {
   ///     case TRUE:
   ///         pnfs_ff_striping_pattern    pfsp_striping_pattern;
   ///     case FALSE:
   ///         void;
   /// };
   ///
   /// struct pnfs_ff_layouthint {
   ///     pnfs_ff_max_comps_hint          pflh_max_comps_hint;
   ///     pnfs_ff_stripe_unit_hint        pflh_stripe_unit_hint;
   ///     pnfs_ff_mirror_cnt_hint         pflh_mirror_cnt_hint;
   ///     pnfs_ff_striping_pattern_hint   pflh_striping_pattern_hint;
   /// };
   ///

   This type conveys hints for the desired data map.  All parameters
   are optional, so the client can give values for only the
   parameters it cares about; e.g., it can provide a hint for the
   desired number of mirrored components, regardless of the striping
   pattern selected for the file.  The server should make an attempt
   to honor the hints, but it can ignore any or all of them at its
   own discretion and without failing the respective CREATE
   operation.

9.  Recalling Layouts

   The Flexible Files metadata server should recall outstanding
   layouts in the following cases:

   o  When the file's security policy changes, i.e., Access Control
      Lists (ACLs) or permission mode bits are set.

   o  When the file's layout changes, rendering outstanding layouts
      invalid.

   o  When there are sharing conflicts.  For example, the server will
      issue stripe-aligned layout segments for RAID-5 objects.  To
      prevent corruption of the file's parity, multiple clients must
      not hold valid write layouts for the same stripes.  An
      outstanding READ/WRITE (RW) layout should be recalled when a
      conflicting LAYOUTGET is received from a different client for
      LAYOUTIOMODE4_RW and for a byte range overlapping with the
      outstanding layout segment.

9.1.  CB_RECALL_ANY

   The metadata server can use the CB_RECALL_ANY callback operation
   to notify the client to return some or all of its layouts.
   NFSv4.1 [3] defines the following types:

   const RCA4_TYPE_MASK_FF_LAYOUT_MIN     = -2;
   const RCA4_TYPE_MASK_FF_LAYOUT_MAX     = -1;
   [[RFC Editor: please insert assigned constants]]

   struct CB_RECALL_ANY4args {
           uint32_t        craa_objects_to_keep;
           bitmap4         craa_type_mask;
   };

   Typically, CB_RECALL_ANY will be used to recall client state when
   the server needs to reclaim resources.  The craa_type_mask bitmap
   specifies the type of resources that are recalled, and the
   craa_objects_to_keep value specifies how many of the recalled
   objects the client is allowed to keep.  The Flexible Files layout
   type mask flags are defined as follows.  They represent the iomode
   of the recalled layouts.  In response, the client SHOULD return
   layouts of the recalled iomode that it needs the least, keeping at
   most craa_objects_to_keep Flexible Files layouts.

   /// enum pnfs_ff_cb_recall_any_mask {
   ///     PNFS_FF_RCA4_TYPE_MASK_READ = -2,
   ///     PNFS_FF_RCA4_TYPE_MASK_RW   = -1
   [[RFC Editor: please insert assigned constants]]
   /// };
   ///

   The PNFS_FF_RCA4_TYPE_MASK_READ flag notifies the client to return
   layouts of iomode LAYOUTIOMODE4_READ.  Similarly, the
   PNFS_FF_RCA4_TYPE_MASK_RW flag notifies the client to return
   layouts of iomode LAYOUTIOMODE4_RW.  When both mask flags are set,
   the client is notified to return layouts of either iomode.

10.  Client Fencing

   In cases where clients are uncommunicative and their lease has
   expired, or when clients fail to return recalled layouts within at
   least a lease period (see "Recalling a Layout" [3]), the server
   MAY revoke client layouts and/or device address mappings and
   reassign these resources to other clients.  To avoid data
   corruption, the metadata server MUST fence off the revoked clients
   from the respective objects as described in Section 2.1.

11.  Security Considerations

   The pNFS extension partitions the NFSv4 file system protocol into
   two parts, the control path and the data path (storage protocol).
   The control path contains all the new operations described by this
   extension; all existing NFSv4 security mechanisms and features
   apply to the control path.
   The combination of components in a pNFS system is required to
   preserve the security properties of NFSv4 with respect to an
   entity accessing data via a client, including security
   countermeasures to defend against threats that NFSv4 provides
   defenses for in environments where these threats are considered
   significant.

   The metadata server enforces the file access-control policy at
   LAYOUTGET time.  The client should use suitable authorization
   credentials for getting the layout for the requested iomode (READ
   or RW), and the server verifies the permissions and ACL for these
   credentials, possibly returning NFS4ERR_ACCESS if the client is
   not allowed the requested iomode.  If the LAYOUTGET operation
   succeeds, the client receives, as part of the layout, a set of
   credentials allowing it I/O access to the specified objects
   corresponding to the requested iomode.  When the client acts on
   I/O operations on behalf of its local users, it MUST authenticate
   and authorize the user by issuing respective OPEN and ACCESS calls
   to the metadata server, similar to having NFSv4 data delegations.
   If access is allowed, the client uses the corresponding (READ or
   RW) credentials to perform the I/O operations at the object
   storage devices.  When the metadata server receives a request to
   change a file's permissions or ACL, it SHOULD recall all layouts
   for that file, and it MUST fence off the clients holding
   outstanding layouts for the respective file by implicitly
   invalidating the outstanding credentials on all Component Objects
   comprising the file before committing to the new permissions and
   ACL.  Doing this will ensure that clients re-authorize their
   layouts according to the modified permissions and ACL by
   requesting new layouts.  Recalling the layouts in this case is a
   courtesy of the server, intended to prevent clients from getting
   an error on I/Os done after the client was fenced off.

12.  Striping Topologies Extensibility

   New striping topologies that are not specified in this document
   may be specified using @@@.  These must be documented in the IETF
   by submitting an RFC augmenting this protocol, provided that:

   o  New striping topologies MUST be wire-protocol compatible with
      the Flexible Files Layout protocol as specified in this
      document.

   o  Some members of the data structures specified here may be
      declared as optional or mandatory-not-to-be-used.

   o  Upon acceptance by the IETF as an RFC, new striping topology
      constants MUST be registered with IANA (Section 13).

13.  IANA Considerations

   As described in NFSv4.1 [3], new layout type numbers have been
   assigned by IANA.  This document defines the protocol associated
   with the existing layout type number, LAYOUT4_FLEX_FILES.

   A new IANA registry should be assigned to register new data map
   striping topologies described by the enumerated type
   pnfs_ff_striping_pattern.

14.  Normative References

   [1]   Callaghan, B., Pawlowski, B., and P. Staubach, "NFS Version
         3 Protocol Specification", RFC 1813, June 1995.

   [2]   Shepler, S., Callaghan, B., Robinson, D., Thurlow, R.,
         Beame, C., Eisler, M., and D. Noveck, "Network File System
         (NFS) version 4 Protocol", RFC 3530, April 2003.

   [3]   Shepler, S., Ed., Eisler, M., Ed., and D. Noveck, Ed.,
         "Network File System (NFS) Version 4 Minor Version 1
         Protocol", RFC 5661, January 2010.
   [4]   Halevy, B., Ed., Welch, B., Ed., and J. Zelenka, Ed.,
         "Object-Based Parallel NFS (pNFS) Operations", RFC 5664,
         January 2010.

   [5]   Bradner, S., "Key words for use in RFCs to Indicate
         Requirement Levels", BCP 14, RFC 2119, March 1997.

   [6]   Eisler, M., "XDR: External Data Representation Standard",
         STD 67, RFC 4506, May 2006.

   [7]   Shepler, S., Ed., Eisler, M., Ed., and D. Noveck, Ed.,
         "Network File System (NFS) Version 4 Minor Version 1
         External Data Representation Standard (XDR) Description",
         RFC 5662, January 2010.

   [8]   IETF Trust, "Legal Provisions Relating to IETF Documents",
         November 2008, <http://trustee.ietf.org/license-info>.

   [9]   MacWilliams, F. and N. Sloane, "The Theory of
         Error-Correcting Codes, Part I", 1977.

   [10]  Anvin, H., "The Mathematics of RAID-6", May 2009,
         <http://kernel.org/pub/linux/kernel/people/hpa/raid6.pdf>.

   [11]  Wikipedia, The Free Encyclopedia, "Least common multiple",
         April 2011,
         <http://en.wikipedia.org/wiki/Least_common_multiple>.

   [12]  Plank, J., Luo, J., Schuman, C., Xu, L., and Z.
         Wilcox-O'Hearn, "A Performance Evaluation and Examination of
         Open-source Erasure Coding Libraries for Storage", 2007.

Appendix A.  Acknowledgments

   The pNFS Objects Layout, which inspired this document, was
   authored and revised by Brent Welch, Jim Zelenka, Benny Halevy,
   and Boaz Harrosh.

   Those who provided miscellaneous comments on early drafts of this
   document include: Matt W. Benjamin, Adam Emerson, Tom Haynes, J.
   Bruce Fields, and Lev Solomonov.

Author's Address

   Benny Halevy
   PrimaryData, Inc.

   Email: bhalevy@primarydata.com
   URI:   http://www.primarydata.com