2 NFSv4 T. Haynes, Ed. 3 Internet-Draft NetApp 4 Intended status: Standards Track March 13, 2013 5 Expires: September 14, 2013 7 NFS Version 4 Minor Version 2 8 draft-ietf-nfsv4-minorversion2-18.txt 10 Abstract 12 This Internet-Draft describes NFS version 4 minor version two, 13 focusing mainly on the protocol extensions made from NFS version 4 14 minor version 0 and NFS version 4 minor version 1. Major extensions 15 introduced in NFS version 4 minor version two include: Server-side 16 Copy, Application I/O Advise, Space Reservations, Sparse Files, 17 Application Data Blocks, and Labeled NFS. 19 Requirements Language 21 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 22 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 23 document are to be interpreted as described in RFC 2119 [8]. 25 Status of this Memo 27 This Internet-Draft is submitted in full conformance with the 28 provisions of BCP 78 and BCP 79. 30 Internet-Drafts are working documents of the Internet Engineering 31 Task Force (IETF). Note that other groups may also distribute 32 working documents as Internet-Drafts. The list of current Internet- 33 Drafts is at http://datatracker.ietf.org/drafts/current/. 35 Internet-Drafts are draft documents valid for a maximum of six months 36 and may be updated, replaced, or obsoleted by other documents at any 37 time.
It is inappropriate to use Internet-Drafts as reference 38 material or to cite them other than as "work in progress." 40 This Internet-Draft will expire on September 14, 2013. 42 Copyright Notice 44 Copyright (c) 2013 IETF Trust and the persons identified as the 45 document authors. All rights reserved. 47 This document is subject to BCP 78 and the IETF Trust's Legal 48 Provisions Relating to IETF Documents 49 (http://trustee.ietf.org/license-info) in effect on the date of 50 publication of this document. Please review these documents 51 carefully, as they describe your rights and restrictions with respect 52 to this document. Code Components extracted from this document must 53 include Simplified BSD License text as described in Section 4.e of 54 the Trust Legal Provisions and are provided without warranty as 55 described in the Simplified BSD License. 57 Table of Contents 59 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 5 60 1.1. The NFS Version 4 Minor Version 2 Protocol . . . . . . . 5 61 1.2. Scope of This Document . . . . . . . . . . . . . . . . . 5 62 1.3. NFSv4.2 Goals . . . . . . . . . . . . . . . . . . . . . . 5 63 1.4. Overview of NFSv4.2 Features . . . . . . . . . . . . . . 6 64 1.4.1. Server-side Copy . . . . . . . . . . . . . . . . . . . 6 65 1.4.2. Application I/O Advise . . . . . . . . . . . . . . . . 6 66 1.4.3. Sparse Files . . . . . . . . . . . . . . . . . . . . . 6 67 1.4.4. Space Reservation . . . . . . . . . . . . . . . . . . 6 68 1.4.5. Application Data Hole (ADH) Support . . . . . . . . . 6 69 1.4.6. Labeled NFS . . . . . . . . . . . . . . . . . . . . . 6 70 1.5. Differences from NFSv4.1 . . . . . . . . . . . . . . . . 7 71 2. Minor Versioning . . . . . . . . . . . . . . . . . . . . . . . 7 72 3. Server-side Copy . . . . . . . . . . . . . . . . . . . . . . . 10 73 3.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 10 74 3.2. Protocol Overview . . . . . . . . . . . . . . . . . . . . 10 75 3.2.1. 
Overview of Copy Operations . . . . . . . . . . . . . 11 76 3.2.2. Locking the Files . . . . . . . . . . . . . . . . . . 12 77 3.2.3. Intra-Server Copy . . . . . . . . . . . . . . . . . . 12 78 3.2.4. Inter-Server Copy . . . . . . . . . . . . . . . . . . 14 79 3.2.5. Server-to-Server Copy Protocol . . . . . . . . . . . . 17 80 3.3. Requirements for Operations . . . . . . . . . . . . . . . 18 81 3.3.1. netloc4 - Network Locations . . . . . . . . . . . . . 19 82 3.3.2. Offload Stateids . . . . . . . . . . . . . . . . . . . 19 83 3.4. Security Considerations . . . . . . . . . . . . . . . . . 20 84 3.4.1. Inter-Server Copy Security . . . . . . . . . . . . . . 20 85 4. Support for Application IO Hints . . . . . . . . . . . . . . . 28 86 5. Sparse Files . . . . . . . . . . . . . . . . . . . . . . . . . 28 87 5.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 28 88 5.2. Terminology . . . . . . . . . . . . . . . . . . . . . . . 29 89 5.3. New Operations . . . . . . . . . . . . . . . . . . . . . 29 90 5.3.1. READ_PLUS . . . . . . . . . . . . . . . . . . . . . . 30 91 5.3.2. WRITE_PLUS . . . . . . . . . . . . . . . . . . . . . . 30 92 6. Space Reservation . . . . . . . . . . . . . . . . . . . . . . 30 93 6.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 30 94 7. Application Data Hole Support . . . . . . . . . . . . . . . . 32 95 7.1. Generic Framework . . . . . . . . . . . . . . . . . . . . 33 96 7.1.1. Data Hole Representation . . . . . . . . . . . . . . . 34 97 7.1.2. Data Content . . . . . . . . . . . . . . . . . . . . . 34 98 7.2. An Example of Detecting Corruption . . . . . . . . . . . 35 99 7.3. Example of READ_PLUS . . . . . . . . . . . . . . . . . . 36 100 8. Labeled NFS . . . . . . . . . . . . . . . . . . . . . . . . . 37 101 8.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 37 102 8.2. Definitions . . . . . . . . . . . . . . . . . . . . . . . 38 103 8.3. MAC Security Attribute . . . . . . . . . . . . . . . . . 38 104 8.3.1. 
Delegations . . . . . . . . . . . . . . . . . . . . . 39 105 8.3.2. Permission Checking . . . . . . . . . . . . . . . . . 39 106 8.3.3. Object Creation . . . . . . . . . . . . . . . . . . . 39 107 8.3.4. Existing Objects . . . . . . . . . . . . . . . . . . . 40 108 8.3.5. Label Changes . . . . . . . . . . . . . . . . . . . . 40 109 8.4. pNFS Considerations . . . . . . . . . . . . . . . . . . . 40 110 8.5. Discovery of Server Labeled NFS Support . . . . . . . . . 41 111 8.6. MAC Security NFS Modes of Operation . . . . . . . . . . . 41 112 8.6.1. Full Mode . . . . . . . . . . . . . . . . . . . . . . 41 113 8.6.2. Guest Mode . . . . . . . . . . . . . . . . . . . . . . 43 114 8.7. Security Considerations . . . . . . . . . . . . . . . . . 43 115 9. Sharing change attribute implementation details with NFSv4 116 clients . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 117 9.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 44 118 10. Security Considerations . . . . . . . . . . . . . . . . . . . 44 119 11. Error Values . . . . . . . . . . . . . . . . . . . . . . . . . 44 120 11.1. Error Definitions . . . . . . . . . . . . . . . . . . . . 45 121 11.1.1. General Errors . . . . . . . . . . . . . . . . . . . . 45 122 11.1.2. Server to Server Copy Errors . . . . . . . . . . . . . 45 123 11.1.3. Labeled NFS Errors . . . . . . . . . . . . . . . . . . 46 124 11.2. New Operations and Their Valid Errors . . . . . . . . . . 46 125 11.3. New Callback Operations and Their Valid Errors . . . . . 49 126 12. New File Attributes . . . . . . . . . . . . . . . . . . . . . 50 127 12.1. New RECOMMENDED Attributes - List and Definition 128 References . . . . . . . . . . . . . . . . . . . . . . . 50 129 12.2. Attribute Definitions . . . . . . . . . . . . . . . . . . 51 130 13. Operations: REQUIRED, RECOMMENDED, or OPTIONAL . . . . . . . . 54 131 14. NFSv4.2 Operations . . . . . . . . . . . . . . . . . . . . . . 58 132 14.1. Operation 59: COPY - Initiate a server-side copy . . . 
. 58 133 14.2. Operation 60: OFFLOAD_ABORT - Cancel a server-side 134 copy . . . . . . . . . . . . . . . . . . . . . . . . . . 64 135 14.3. Operation 61: COPY_NOTIFY - Notify a source server of 136 a future copy . . . . . . . . . . . . . . . . . . . . . . 65 137 14.4. Operation 62: OFFLOAD_REVOKE - Revoke a destination 138 server's copy privileges . . . . . . . . . . . . . . . . 67 139 14.5. Operation 63: OFFLOAD_STATUS - Poll for status of a 140 server-side copy . . . . . . . . . . . . . . . . . . . . 68 141 14.6. Modification to Operation 42: EXCHANGE_ID - 142 Instantiate Client ID . . . . . . . . . . . . . . . . . . 69 143 14.7. Operation 64: WRITE_PLUS . . . . . . . . . . . . . . . . 70 144 14.8. Operation 67: IO_ADVISE - Application I/O access 145 pattern hints . . . . . . . . . . . . . . . . . . . . . . 75 146 14.9. Changes to Operation 51: LAYOUTRETURN . . . . . . . . . . 81 147 14.10. Operation 65: READ_PLUS . . . . . . . . . . . . . . . . . 84 148 14.11. Operation 66: SEEK . . . . . . . . . . . . . . . . . . . 89 149 15. NFSv4.2 Callback Operations . . . . . . . . . . . . . . . . . 90 150 15.1. Operation 15: CB_OFFLOAD - Report results of an 151 asynchronous operation . . . . . . . . . . . . . . . . . 90 152 16. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 91 153 17. References . . . . . . . . . . . . . . . . . . . . . . . . . . 92 154 17.1. Normative References . . . . . . . . . . . . . . . . . . 92 155 17.2. Informative References . . . . . . . . . . . . . . . . . 92 156 Appendix A. Acknowledgments . . . . . . . . . . . . . . . . . . . 94 157 Appendix B. RFC Editor Notes . . . . . . . . . . . . . . . . . . 94 158 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . . 95 160 1. Introduction 162 1.1. The NFS Version 4 Minor Version 2 Protocol 164 The NFS version 4 minor version 2 (NFSv4.2) protocol is the third 165 minor version of the NFS version 4 (NFSv4) protocol. 
The first minor 166 version, NFSv4.0, is described in [9] and the second minor version, 167 NFSv4.1, is described in [1]. It follows the guidelines for minor 168 versioning that are listed in Section 11 of [9]. 170 As a minor version, NFSv4.2 is consistent with the overall goals for 171 NFSv4, but extends the protocol so as to better meet those goals, 172 based on experiences with NFSv4.1. In addition, NFSv4.2 has adopted 173 some additional goals, which motivate some of the major extensions in 174 NFSv4.2. 176 1.2. Scope of This Document 178 This document describes the NFSv4.2 protocol. With respect to 179 NFSv4.0 and NFSv4.1, this document does not: 181 o describe the NFSv4.0 or NFSv4.1 protocols, except where needed to 182 contrast with NFSv4.2 184 o modify the specification of the NFSv4.0 or NFSv4.1 protocols 186 o clarify the NFSv4.0 or NFSv4.1 protocols. I.e., any 187 clarifications made here apply to NFSv4.2 and neither of the prior 188 protocols 190 The full XDR for NFSv4.2 is presented in [2]. 192 1.3. NFSv4.2 Goals 194 The goal of the design of NFSv4.2 is to take common local file system 195 features and offer them remotely. These features might 197 o already be available on the servers, e.g., sparse files 199 o be under development as a new standard, e.g., SEEK_HOLE and 200 SEEK_DATA 202 o be used by clients with the servers via some proprietary means, 203 e.g., Labeled NFS 205 but the clients are not able to leverage them on the server within 206 the confines of the NFS protocol. 208 1.4. Overview of NFSv4.2 Features 210 1.4.1. Server-side Copy 212 A traditional file copy from one server to another results in the 213 data being put on the network twice - source to client and then 214 client to destination. New operations are introduced to allow the 215 client to authorize the two servers to interact directly. As this 216 copy can be lengthy, asynchronous support is also provided. 218 1.4.2. 
Application I/O Advise 220 Applications and clients want to advise the server as to expected I/O 221 behavior. Using IO_ADVISE (see Section 14.8) to communicate future 222 I/O behavior such as whether a file will be accessed sequentially or 223 randomly, and whether a file will or will not be accessed in the near 224 future, allows servers to optimize future I/O requests for a file by, 225 for example, prefetching or evicting data. This operation can be 226 used to support the posix_fadvise function as well as other 227 applications such as databases and video editors. 229 1.4.3. Sparse Files 231 Sparse files are ones which have unallocated data blocks as holes in 232 the file. Such holes are typically transferred as 0s during I/O. 233 READ_PLUS (see Section 14.10) allows a server to send back to the 234 client metadata describing the hole and WRITE_PLUS (see Section 14.7) 235 allows the client to punch holes into a file. In addition, SEEK (see 236 Section 14.11) is provided to scan for the next hole or data from a 237 given location. 239 1.4.4. Space Reservation 241 When a file is sparse, one concern applications have is ensuring that 242 there will always be enough data blocks available for the file during 243 future writes. A new attribute, space_reserved (see Section 12.2.4) 244 provides the client a guarantee that space will be available. 246 1.4.5. Application Data Hole (ADH) Support 248 Some applications treat a file as if it were a disk and as such want 249 to initialize (or format) the file image. We extend both READ_PLUS 250 and WRITE_PLUS to understand this metadata as a new form of a hole. 252 1.4.6. Labeled NFS 254 While both clients and servers can employ Mandatory Access Control 255 (MAC) security models to enforce data access, there has been no 256 protocol support to allow full interoperability. A new file object 257 attribute, sec_label (see Section 12.2.2) allows for the server to 258 store and enforce MAC labels. 
The format of the sec_label 259 accommodates any MAC security system. 261 1.5. Differences from NFSv4.1 263 In NFSv4.1, the only way to introduce new variants of an operation 264 was to introduce a new operation. I.e., READ becomes either READ2 or 265 READ_PLUS. With the use of discriminated unions as parameters to 266 such functions in NFSv4.2, it is possible to add a new arm in a 267 subsequent minor version. And it is also possible to move such an 268 operation from OPTIONAL/RECOMMENDED to REQUIRED. Forcing an 269 implementation to adopt each arm of a discriminated union at such a 270 time does not meet the spirit of the minor versioning rules. As 271 such, new arms of a discriminated union MUST follow the same 272 guidelines for minor versioning as operations in NFSv4.1 - i.e., they 273 may not be made REQUIRED. To support this, a new error code, 274 NFS4ERR_UNION_NOTSUPP, is introduced which allows the server to 275 communicate to the client that the operation is supported, but the 276 specific arm of the discriminated union is not. 278 2. Minor Versioning 280 To address the requirement of an NFS protocol that can evolve as the 281 need arises, the NFSv4 protocol contains the rules and framework to 282 allow for future minor changes or versioning. 284 The base assumption with respect to minor versioning is that any 285 future accepted minor version will be documented in one or more 286 Standards Track RFCs. Minor version 0 of the NFSv4 protocol is 287 represented by [9], minor version 1 by [1], and minor version 2 by 288 this document. The COMPOUND and CB_COMPOUND procedures support the 289 encoding of the minor version being requested by the client. 291 The following items represent the basic rules for the development of 292 minor versions. Note that a future minor version may modify or add 293 to the following rules as part of the minor version definition. 295 1. Procedures are not added or deleted. 
297 To maintain the general RPC model, NFSv4 minor versions will not 298 add to or delete procedures from the NFS program. 300 2. Minor versions may add operations to the COMPOUND and 301 CB_COMPOUND procedures. 303 The addition of operations to the COMPOUND and CB_COMPOUND 304 procedures does not affect the RPC model. 306 * Minor versions may append attributes to the bitmap4 that 307 represents sets of attributes and to the fattr4 that 308 represents sets of attribute values. 310 This allows for the expansion of the attribute model to allow 311 for future growth or adaptation. 313 * Minor version X must append any new attributes after the last 314 documented attribute. 316 Since attribute results are specified as an opaque array of 317 per-attribute, XDR-encoded results, the complexity of adding 318 new attributes in the midst of the current definitions would 319 be too burdensome. 321 3. Minor versions must not modify the structure of an existing 322 operation's arguments or results. 324 Again, the complexity of handling multiple structure definitions 325 for a single operation is too burdensome. New operations should 326 be added instead of modifying existing structures for a minor 327 version. 329 This rule does not preclude the following adaptations in a minor 330 version: 332 * adding bits to flag fields, such as new attributes to 333 GETATTR's bitmap4 data type, and providing corresponding 334 variants of opaque arrays, such as a notify4 used together 335 with such bitmaps 337 * adding bits to existing attributes like ACLs that have flag 338 words 340 * extending enumerated types (including NFS4ERR_*) with new 341 values 343 * adding cases to a switched union 345 4. Note that when adding new cases to a switched union, a minor 346 version must not make new cases be REQUIRED. While the 347 encapsulating operation may be REQUIRED, the new cases (the 348 specific arms of the discriminated union) are not.
The error code 349 NFS4ERR_UNION_NOTSUPP is used to notify the client when the 350 server does not support such a case. 352 5. Minor versions must not modify the structure of existing 353 attributes. 355 6. Minor versions must not delete operations. 357 This prevents the potential reuse of a particular operation 358 "slot" in a future minor version. 360 7. Minor versions must not delete attributes. 362 8. Minor versions must not delete flag bits or enumeration values. 364 9. Minor versions may declare an operation MUST NOT be implemented. 366 Specifying that an operation MUST NOT be implemented is 367 equivalent to obsoleting an operation. For the client, it means 368 that the operation MUST NOT be sent to the server. For the 369 server, an NFS error can be returned as opposed to "dropping" 370 the request as an XDR decode error. This approach allows for 371 the obsolescence of an operation while maintaining its structure 372 so that a future minor version can reintroduce the operation. 374 1. Minor versions may declare that an attribute MUST NOT be 375 implemented. 377 2. Minor versions may declare that a flag bit or enumeration 378 value MUST NOT be implemented. 380 10. Minor versions may declare an operation to be DEPRECATED, which 381 indicates the plan to remove the operation in the next release. 382 Such a labeling does not affect whether the operation is 383 REQUIRED or RECOMMENDED or OPTIONAL. I.e., an operation may be 384 both REQUIRED for the given minor version and earmarked for MUST 385 NOT be implemented for the next release. Note that this two 386 minor version release approach is put in place to mitigate 387 design and implementation mistakes. As such, even if an 388 operation is marked DEPRECATED in a given minor release, it may 389 end up not being marked as MUST NOT implement in the next minor 390 version. 392 11.
Minor versions may downgrade features (i.e., operations and 393 attributes) from REQUIRED to RECOMMENDED, or RECOMMENDED to 394 OPTIONAL. Also, if a feature was marked as DEPRECATED in a 395 prior minor version, it may be downgraded from REQUIRED to 396 OPTIONAL. 398 12. Minor versions may upgrade features from OPTIONAL to 399 RECOMMENDED, or RECOMMENDED to REQUIRED. 401 13. A client and server that support minor version X SHOULD support 402 minor versions 0 through X-1 as well. 404 14. Except for infrastructural changes, a minor version must not 405 introduce REQUIRED new features. 407 This rule allows for the introduction of new functionality and 408 forces the use of implementation experience before designating a 409 feature as REQUIRED. On the other hand, some classes of 410 features are infrastructural and have broad effects. Allowing 411 infrastructural features to be RECOMMENDED or OPTIONAL 412 complicates implementation of the minor version. 414 15. A client MUST NOT attempt to use a stateid, filehandle, or 415 similar returned object from the COMPOUND procedure with minor 416 version X for another COMPOUND procedure with minor version Y, 417 where X != Y. 419 3. Server-side Copy 421 3.1. Introduction 423 The server-side copy feature provides a mechanism for the NFS client 424 to perform a file copy on the server without the data being 425 transmitted back and forth over the network. Without this feature, 426 an NFS client copies data from one location to another by reading the 427 data from the server over the network, and then writing the data back 428 over the network to the server. Using this server-side copy 429 operation, the client is able to instruct the server to copy the data 430 locally without the data being sent back and forth over the network 431 unnecessarily. 433 If the source object and destination object are on different file 434 servers, the file servers will communicate with one another to 435 perform the copy operation. 
The server-to-server protocol by which 436 this is accomplished is not defined in this document. 438 3.2. Protocol Overview 440 The server-side copy offload operations support both intra-server and 441 inter-server file copies. An intra-server copy is a copy in which 442 the source file and destination file reside on the same server. In 443 an inter-server copy, the source file and destination file are on 444 different servers. In both cases, the copy may be performed 445 synchronously or asynchronously. 447 Throughout the rest of this document, we refer to the NFS server 448 containing the source file as the "source server" and the NFS server 449 to which the file is transferred as the "destination server". In the 450 case of an intra-server copy, the source server and destination 451 server are the same server. Therefore in the context of an intra- 452 server copy, the terms source server and destination server refer to 453 the single server performing the copy. 455 The operations described below are designed to copy files. Other 456 file system objects can be copied by building on these operations or 457 using other techniques. For example if the user wishes to copy a 458 directory, the client can synthesize a directory copy by first 459 creating the destination directory and then copying the source 460 directory's files to the new destination directory. If the user 461 wishes to copy a namespace junction [10] [11], the client can use the 462 ONC RPC Federated Filesystem protocol [11] to perform the copy. 463 Specifically the client can determine the source junction's 464 attributes using the FEDFS_LOOKUP_FSN procedure and create a 465 duplicate junction using the FEDFS_CREATE_JUNCTION procedure. 467 For the inter-server copy, the operations are defined to be 468 compatible with the traditional copy authentication approach. The 469 client and user are authorized at the source for reading. Then they 470 are authorized at the destination for writing. 472 3.2.1. 
Overview of Copy Operations 474 COPY_NOTIFY: For inter-server copies, the client sends this 475 operation to the source server to notify it of a future file copy 476 from a given destination server for the given user. 477 (Section 14.3) 479 OFFLOAD_REVOKE: Also for inter-server copies, the client sends this 480 operation to the source server to revoke permission to copy a file 481 for the given user. (Section 14.4) 483 COPY: Used by the client to request a file copy. (Section 14.1) 485 OFFLOAD_ABORT: Used by the client to abort an asynchronous file 486 copy. (Section 14.2) 488 OFFLOAD_STATUS: Used by the client to poll the status of an 489 asynchronous file copy. (Section 14.5) 491 CB_OFFLOAD: Used by the destination server to report the results of 492 an asynchronous file copy to the client. (Section 15.1) 494 3.2.2. Locking the Files 496 Both the source and destination file may need to be locked to protect 497 the content during the copy operations. A client can achieve this by 498 a combination of OPEN and LOCK operations. I.e., either share or 499 byte range locks might be desired. 501 3.2.3. Intra-Server Copy 503 To copy a file on a single server, the client uses a COPY operation. 504 The server may respond to the copy operation with the final results 505 of the copy or it may perform the copy asynchronously and deliver the 506 results using a CB_OFFLOAD operation callback. If the copy is 507 performed asynchronously, the client may poll the status of the copy 508 using OFFLOAD_STATUS or cancel the copy using OFFLOAD_ABORT. 510 A synchronous intra-server copy is shown in Figure 1. In this 511 example, the NFS server chooses to perform the copy synchronously. 512 The copy operation is completed, either successfully or 513 unsuccessfully, before the server replies to the client's request. 514 The server's reply contains the final result of the operation. 
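The synchronous and asynchronous intra-server cases just described can be sketched as a toy model (all names here -- FakeServer, client_copy, CopyResult -- are invented for illustration; the real COPY, OFFLOAD_STATUS, and CB_OFFLOAD operations are carried inside NFSv4 COMPOUND procedures and are not modeled):

```python
# Toy model of the client's handling of a server-side COPY reply.
# A synchronous copy is complete when the reply arrives; an
# asynchronous copy returns a copy stateid that the client can use
# to poll (OFFLOAD_STATUS) or cancel (OFFLOAD_ABORT).

from dataclasses import dataclass

@dataclass
class CopyResult:
    complete: bool            # has the copy finished?
    copy_stateid: int = 0     # returned only for an asynchronous copy

class FakeServer:
    """Stand-in server that copies at most `chunk` bytes per poll."""
    def __init__(self, size, synchronous, chunk=4096):
        self.size, self.synchronous, self.chunk = size, synchronous, chunk
        self.copied = 0

    def copy(self, src_fh, dst_fh):
        if self.synchronous:
            self.copied = self.size          # finished before replying
            return CopyResult(complete=True)
        return CopyResult(complete=False, copy_stateid=7)

    def offload_status(self, stateid):       # client polls progress
        self.copied = min(self.size, self.copied + self.chunk)
        return CopyResult(self.copied == self.size, stateid)

def client_copy(server, src_fh, dst_fh):
    result = server.copy(src_fh, dst_fh)
    if result.complete:                      # synchronous: done already
        return result
    while True:                              # async: poll until complete
        status = server.offload_status(result.copy_stateid)
        if status.complete:
            return status

# Both styles end with a completed copy from the client's view.
assert client_copy(FakeServer(10000, synchronous=True), "src", "dst").complete
assert client_copy(FakeServer(10000, synchronous=False), "src", "dst").complete
```

In the real protocol the asynchronous result is ultimately delivered by the server's CB_OFFLOAD callback; the polling loop above simply stands in for "wait until the final result is known".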
    Client                                 Server
       +                                      +
       |                                      |
       |--- OPEN ---------------------------->| Client opens
       |<------------------------------------/| the source file
       |                                      |
       |--- OPEN ---------------------------->| Client opens
       |<------------------------------------/| the destination file
       |                                      |
       |--- COPY ---------------------------->| Client requests
       |<------------------------------------/| a file copy
       |                                      |
       |--- CLOSE --------------------------->| Client closes
       |<------------------------------------/| the destination file
       |                                      |
       |--- CLOSE --------------------------->| Client closes
       |<------------------------------------/| the source file
       |                                      |
       |                                      |

    Figure 1: A synchronous intra-server copy.

537 An asynchronous intra-server copy is shown in Figure 2. In this 538 example, the NFS server performs the copy asynchronously. The 539 server's reply to the copy request indicates that the copy operation 540 was initiated and the final result will be delivered at a later time. 541 The server's reply also contains a copy stateid. The client may use 542 this copy stateid to poll for status information (as shown) or to 543 cancel the copy using an OFFLOAD_ABORT. When the server completes the 544 copy, the server performs a callback to the client and reports the 545 results.

    Client                                 Server
       +                                      +
       |                                      |
       |--- OPEN ---------------------------->| Client opens
       |<------------------------------------/| the source file
       |                                      |
       |--- OPEN ---------------------------->| Client opens
       |<------------------------------------/| the destination file
       |                                      |
       |--- COPY ---------------------------->| Client requests
       |<------------------------------------/| a file copy
       |                                      |
       |                                      |
       |--- OFFLOAD_STATUS ------------------>| Client may poll
       |<------------------------------------/| for status
       |                                      |
       |  .                                   | Multiple OFFLOAD_STATUS
       |  .                                   | operations may be sent.
       |  .                                   |
       |                                      |
       |<-- CB_OFFLOAD -----------------------| Server reports results
       |\------------------------------------>|
       |                                      |
       |--- CLOSE --------------------------->| Client closes
       |<------------------------------------/| the destination file
       |                                      |
       |--- CLOSE --------------------------->| Client closes
       |<------------------------------------/| the source file
       |                                      |
       |                                      |

    Figure 2: An asynchronous intra-server copy.

580 3.2.4. Inter-Server Copy 582 A copy may also be performed between two servers. The copy protocol 583 is designed to accommodate a variety of network topologies. As shown 584 in Figure 3, the client and servers may be connected by multiple 585 networks. In particular, the servers may be connected by a 586 specialized, high speed network (network 192.0.2.0/24 in the diagram) 587 that does not include the client. The protocol allows the client to 588 setup the copy between the servers (over network 203.0.113.0/24 in 589 the diagram) and for the servers to communicate on the high speed 590 network if they choose to do so.

                     192.0.2.0/24
      +-------------------------------------+
      |                                     |
      |                                     |
      | 192.0.2.18                          | 192.0.2.56
  +---+----------+                  +------+------+
  |    Source    |                  | Destination |
  +---+----------+                  +------+------+
      | 203.0.113.18                        | 203.0.113.56
      |                                     |
      |                                     |
      |            203.0.113.0/24           |
      +------------------+------------------+
                         |
                         |
                         | 203.0.113.243
                   +-----+-----+
                   |  Client   |
                   +-----------+

    Figure 3: An example inter-server network topology.

614 For an inter-server copy, the client notifies the source server that 615 a file will be copied by the destination server using a COPY_NOTIFY 616 operation. The client then initiates the copy by sending the COPY 617 operation to the destination server. The destination server may 618 perform the copy synchronously or asynchronously. 620 A synchronous inter-server copy is shown in Figure 4.
In this case, 621 the destination server chooses to perform the copy before responding 622 to the client's COPY request. 624 An asynchronous copy is shown in Figure 5. In this case, the 625 destination server chooses to respond to the client's COPY request 626 immediately and then perform the copy asynchronously. 628 Client Source Destination 629 + + + 630 | | | 631 |--- OPEN --->| | Returns os1 632 |<------------------/| | 633 | | | 634 |--- COPY_NOTIFY --->| | 635 |<------------------/| | 636 | | | 637 |--- OPEN ---------------------------->| Returns os2 638 |<------------------------------------/| 639 | | | 640 |--- COPY ---------------------------->| 641 | | | 642 | | | 643 | |<----- read -----| 644 | |\--------------->| 645 | | | 646 | | . | Multiple reads may 647 | | . | be necessary 648 | | . | 649 | | | 650 | | | 651 |<------------------------------------/| Destination replies 652 | | | to COPY 653 | | | 654 |--- CLOSE --------------------------->| Release open state 655 |<------------------------------------/| 656 | | | 657 |--- CLOSE --->| | Release open state 658 |<------------------/| | 660 Figure 4: A synchronous inter-server copy. 662 Client Source Destination 663 + + + 664 | | | 665 |--- OPEN --->| | Returns os1 666 |<------------------/| | 667 | | | 668 |--- LOCK --->| | Optional, could be done 669 |<------------------/| | with a share lock 670 | | | 671 |--- COPY_NOTIFY --->| | Need to pass in 672 |<------------------/| | os1 or lock state 673 | | | 674 | | | 675 | | | 676 |--- OPEN ---------------------------->| Returns os2 677 |<------------------------------------/| 678 | | | 679 |--- LOCK ---------------------------->| Optional ... 680 |<------------------------------------/| 681 | | | 682 |--- COPY ---------------------------->| Need to pass in 683 |<------------------------------------/| os2 or lock state 684 | | | 685 | | | 686 | |<----- read -----| 687 | |\--------------->| 688 | | | 689 | | . | Multiple reads may 690 | | . 
     |                    |        .        |  be necessary
     |                    |        .        |
     |                    |                 |
     |                    |                 |
     |--- OFFLOAD_STATUS ------------------>|  Client may poll
     |<------------------------------------/|  for status
     |                    |                 |
     |                    |        .        |  Multiple OFFLOAD_STATUS
     |                    |        .        |  operations may be sent
     |                    |        .        |
     |                    |                 |
     |                    |                 |
     |                    |                 |
     |<-- CB_OFFLOAD -----------------------|  Destination reports
     |\------------------------------------>|  results
     |                    |                 |
     |--- LOCKU --------------------------->|  Only if LOCK was done
     |<------------------------------------/|
     |                    |                 |
     |--- CLOSE --------------------------->|  Release open state
     |<------------------------------------/|
     |                    |                 |
     |--- LOCKU --->|                       |  Only if LOCK was done
     |<------------------/|                 |
     |                    |                 |
     |--- CLOSE --->|                       |  Release open state
     |<------------------/|                 |
     |                    |                 |

           Figure 5: An asynchronous inter-server copy.

3.2.5.  Server-to-Server Copy Protocol

   The source server and destination server are not required to use a
   specific protocol to transfer the file data.  The choice of what
   protocol to use is ultimately the destination server's decision.

3.2.5.1.  Using NFSv4.x as a Server-to-Server Copy Protocol

   The destination server MAY use standard NFSv4.x (where x >= 1) to
   read the data from the source server.  If NFSv4.x is used for the
   server-to-server copy protocol, the destination server can use the
   filehandle contained in the COPY request with standard NFSv4.x
   operations to read data from the source server.  Specifically, the
   destination server may use the NFSv4.x OPEN operation's CLAIM_FH
   facility to open the file being copied and obtain an open stateid.
   Using the stateid, the destination server may then use NFSv4.x READ
   operations to read the file.

3.2.5.2.
Using an Alternative Server-to-Server Copy Protocol

   In a homogeneous environment, the source and destination servers
   might be able to perform the file copy extremely efficiently using
   specialized protocols.  For example, the source and destination
   servers might be two nodes sharing a common file system format for
   the source and destination file systems.  Thus the source and
   destination are in an ideal position to efficiently render the image
   of the source file to the destination file by replicating the file
   system formats at the block level.  Another possibility is that the
   source and destination might be two nodes sharing a common storage
   area network, in which case there is no need to copy any data at
   all; instead, ownership of the file and its contents might simply be
   reassigned to the destination.  To allow for these possibilities,
   the destination server is allowed to use a server-to-server copy
   protocol of its choice.

   In a heterogeneous environment, using a protocol other than NFSv4.x
   (e.g., HTTP [12] or FTP [13]) presents some challenges.  In
   particular, the destination server is presented with the challenge
   of accessing the source file given only an NFSv4.x filehandle.

   One option for protocols that identify source files with path names
   is to use an ASCII hexadecimal representation of the source
   filehandle as the file name.

   Another option for the source server is to use URLs to direct the
   destination server to a specialized service.  For example, the
   response to COPY_NOTIFY could include the URL
   ftp://s1.example.com:9999/_FH/0x12345, where 0x12345 is the ASCII
   hexadecimal representation of the source filehandle.  When the
   destination server receives the source server's URL, it would use
   "_FH/0x12345" as the file name to pass to the FTP server listening
   on port 9999 of s1.example.com.
   On port 9999 there would be a special instance of the FTP service
   that understands how to convert NFS filehandles to an open file
   descriptor (in many operating systems, this would require a new
   system call, one which is the inverse of the makefh() function that
   the pre-NFSv4 MOUNT service needs).

   Authenticating and identifying the destination server to the source
   server is also a challenge.  Recommendations for how to accomplish
   this are given in Section 3.4.1.2.4 and Section 3.4.1.4.

3.3.  Requirements for Operations

   The implementation of server-side copy is OPTIONAL for both the
   client and the server.  However, in order to successfully copy a
   file, some operations MUST be supported by the client and/or server.

   If a client desires an intra-server file copy, then it MUST support
   the COPY and CB_OFFLOAD operations.  If COPY returns a stateid, then
   the client MAY use the OFFLOAD_ABORT and OFFLOAD_STATUS operations.

   If a client desires an inter-server file copy, then it MUST support
   the COPY, COPY_NOTIFY, and CB_OFFLOAD operations, and MAY use the
   OFFLOAD_REVOKE operation.  If COPY returns a stateid, then the
   client MAY use the OFFLOAD_ABORT and OFFLOAD_STATUS operations.

   If a server supports intra-server copy, then the server MUST support
   the COPY operation.  If a server's COPY operation returns a stateid,
   then the server MUST also support these operations: CB_OFFLOAD,
   OFFLOAD_ABORT, and OFFLOAD_STATUS.

   If a source server supports inter-server copy, then the source
   server MUST support the COPY_NOTIFY and OFFLOAD_REVOKE operations.
   If a destination server supports inter-server copy, then the
   destination server MUST support the COPY operation.  If a
   destination server's COPY operation returns a stateid, then the
   destination server MUST also support these operations: CB_OFFLOAD,
   OFFLOAD_ABORT, COPY_NOTIFY, OFFLOAD_REVOKE, and OFFLOAD_STATUS.
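   The filehandle-to-file-name convention described in Section 3.2.5.2
   above can be sketched as follows.  This is illustrative only and not
   part of the protocol; the server name and port are the example
   values used in that section, and the helper names are made up:

```python
# Illustrative sketch (not part of the protocol): rendering an opaque
# NFSv4 filehandle as the ASCII hexadecimal file name convention of
# Section 3.2.5.2.  Helper names and the server/port are examples.

def filehandle_to_name(fh: bytes) -> str:
    """Render an opaque filehandle as an ASCII hex file name."""
    return "_FH/0x" + fh.hex()

def copy_notify_url(server: str, port: int, fh: bytes) -> str:
    """Build the kind of FTP URL a source server might return in
    COPY_NOTIFY."""
    return f"ftp://{server}:{port}/{filehandle_to_name(fh)}"

# Real filehandles are opaque byte strings, so whole bytes are used
# here rather than the odd-length "0x12345" of the prose example.
url = copy_notify_url("s1.example.com", 9999, bytes([0x01, 0x23, 0x45]))
```

   The destination server would then strip the scheme, host, and port
   from such a URL and present the remaining "_FH/0x..." component as
   the file name to the copy service.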
   Each operation is performed in the context of the user identified by
   the ONC RPC credential of its containing COMPOUND or CB_COMPOUND
   request.  For example, an OFFLOAD_ABORT operation issued by a given
   user requests that a specified COPY operation initiated by the same
   user be canceled.  Therefore an OFFLOAD_ABORT MUST NOT interfere
   with a copy of the same file initiated by another user.

   An NFS server MAY allow an administrative user to monitor or cancel
   copy operations using an implementation-specific interface.

3.3.1.  netloc4 - Network Locations

   The server-side copy operations specify network locations using the
   netloc4 data type shown below:

   enum netloc_type4 {
           NL4_NAME    = 0,
           NL4_URL     = 1,
           NL4_NETADDR = 2
   };
   union netloc4 switch (netloc_type4 nl_type) {
           case NL4_NAME:    utf8str_cis nl_name;
           case NL4_URL:     utf8str_cis nl_url;
           case NL4_NETADDR: netaddr4    nl_addr;
   };

   If the netloc4 is of type NL4_NAME, the nl_name field MUST be
   specified as a UTF-8 string.  The nl_name is expected to be resolved
   to a network address via DNS, LDAP, NIS, /etc/hosts, or some other
   means.  If the netloc4 is of type NL4_URL, a server URL [3]
   appropriate for the server-to-server copy operation is specified as
   a UTF-8 string.  If the netloc4 is of type NL4_NETADDR, the nl_addr
   field MUST contain a valid netaddr4 as defined in Section 3.3.9 of
   [1].

   When netloc4 values are used for an inter-server copy as shown in
   Figure 3, their values may be evaluated on the source server,
   destination server, and client.  The network environment in which
   these systems operate should be configured so that the netloc4
   values are interpreted as intended on each system.

3.3.2.  Offload Stateids

   A server may perform a copy offload operation asynchronously.  An
   asynchronous copy is tracked using an offload stateid.
   Copy offload stateids are included in the COPY, OFFLOAD_ABORT,
   OFFLOAD_STATUS, and CB_OFFLOAD operations.

   Section 8.2.4 of [1] specifies that stateids are valid until either
   (A) the client or server restarts or (B) the client returns the
   resource.

   An offload stateid will be valid until either (A) the client or
   server restarts or (B) the client returns the resource by issuing an
   OFFLOAD_ABORT operation or the client replies to a CB_OFFLOAD
   operation.

   An offload stateid's seqid MUST NOT be 0.  In the context of a copy
   offload operation, it is ambiguous to indicate the most recent copy
   offload operation using a stateid with a seqid of 0.  Therefore a
   copy offload stateid with a seqid of 0 MUST be considered invalid.

3.4.  Security Considerations

   The security considerations pertaining to NFSv4 [9] apply to this
   chapter.

   The standard security mechanisms provided by NFSv4 [9] may be used
   to secure the protocol described in this chapter.

   NFSv4 clients and servers supporting the inter-server copy
   operations described in this chapter are REQUIRED to implement [4],
   including the RPCSEC_GSSv3 privileges copy_from_auth and
   copy_to_auth.  If the server-to-server copy protocol is ONC RPC
   based, the servers are also REQUIRED to implement the RPCSEC_GSSv3
   privilege copy_confirm_auth.  These requirements to implement are
   not requirements to use.  NFSv4 clients and servers are RECOMMENDED
   to use [4] to secure server-side copy operations.

3.4.1.  Inter-Server Copy Security

3.4.1.1.  Requirements for Secure Inter-Server Copy

   Inter-server copy is driven by several requirements:

   o  The specification MUST NOT mandate an inter-server copy protocol.
      There are many ways to copy data.  Some will be more optimal than
      others depending on the identities of the source server and
      destination server.
      For example, the source and destination servers might be two
      nodes sharing a common file system format for the source and
      destination file systems.  Thus the source and destination are in
      an ideal position to efficiently render the image of the source
      file to the destination file by replicating the file system
      formats at the block level.  In other cases, the source and
      destination might be two nodes sharing a common storage area
      network, and thus there is no need to copy any data at all;
      instead, ownership of the file and its contents simply gets
      reassigned to the destination.

   o  The specification MUST provide guidance for using NFSv4.x as a
      copy protocol.  For those source and destination servers willing
      to use NFSv4.x, there are specific security considerations that
      this specification can and does address.

   o  The specification MUST NOT mandate pre-configuration between the
      source and destination server.  Requiring that the source and
      destination first have a "copying relationship" increases the
      administrative burden.  However, the specification MUST NOT
      preclude implementations that require pre-configuration.

   o  The specification MUST NOT mandate a trust relationship between
      the source and destination server.  The NFSv4 security model
      requires mutual authentication between a principal on an NFS
      client and a principal on an NFS server.  This model MUST
      continue with the introduction of COPY.

3.4.1.2.  Inter-Server Copy with RPCSEC_GSSv3

   When the client sends a COPY_NOTIFY to the source server to tell it
   to expect the destination server to attempt to copy data from it,
   the copy is expected to be done on behalf of the principal (called
   the "user principal") that sent the RPC request that encloses the
   COMPOUND procedure that contains the COPY_NOTIFY operation.  The
   user principal is identified by the RPC credentials.
   A mechanism is necessary that allows the user principal to authorize
   the destination server to perform the copy, that lets the source
   server properly authenticate the destination's copy, and that does
   not allow the destination to exceed its authorization.

   An approach that sends delegated credentials of the client's user
   principal to the destination server is not used for the following
   reasons.  If the client's user delegated its credentials, the
   destination would authenticate as the user principal.  If the
   destination were using the NFSv4 protocol to perform the copy, then
   the source server would authenticate the destination server as the
   user principal, and the file copy would securely proceed.  However,
   this approach would allow the destination server to copy other
   files.  The user principal would have to trust the destination
   server not to do so.  This is counter to the requirements, and
   therefore is not considered.  Instead, an approach using
   RPCSEC_GSSv3 [4] privileges is proposed.

   One of the stated applications of the proposed RPCSEC_GSSv3 protocol
   is compound client host and user authentication [+ privilege
   assertion].  For inter-server file copy, we require compound NFS
   server host and user authentication [+ privilege assertion].  The
   distinction between the two is one without meaning.

   RPCSEC_GSSv3 introduces the notion of privileges.  We define three
   privileges:

   copy_from_auth:  A user principal is authorizing a source principal
      ("nfs@<source>") to allow a destination principal
      ("nfs@<destination>") to copy a file from the source to the
      destination.  This privilege is established on the source server
      before the user principal sends a COPY_NOTIFY operation to the
      source server.
   struct copy_from_auth_priv {
           secret4             cfap_shared_secret;
           netloc4             cfap_destination;
           /* the NFSv4 user name that the user principal maps to */
           utf8str_mixed       cfap_username;
           /* equal to seq_num of rpc_gss_cred_vers_3_t */
           unsigned int        cfap_seq_num;
   };

      cfap_shared_secret is a secret value the user principal
      generates.

   copy_to_auth:  A user principal is authorizing a destination
      principal ("nfs@<destination>") to allow it to copy a file from
      the source to the destination.  This privilege is established on
      the destination server before the user principal sends a COPY
      operation to the destination server.

   struct copy_to_auth_priv {
           /* equal to cfap_shared_secret */
           secret4             ctap_shared_secret;
           netloc4             ctap_source;
           /* the NFSv4 user name that the user principal maps to */
           utf8str_mixed       ctap_username;
           /* equal to seq_num of rpc_gss_cred_vers_3_t */
           unsigned int        ctap_seq_num;
   };

      ctap_shared_secret is a secret value the user principal generated
      and that was used to establish the copy_from_auth privilege with
      the source principal.

   copy_confirm_auth:  A destination principal is confirming with the
      source principal that it is authorized to copy data from the
      source on behalf of the user principal.  When the inter-server
      copy protocol is NFSv4, or for that matter, any protocol capable
      of being secured via RPCSEC_GSSv3 (i.e., any ONC RPC protocol),
      this privilege is established before the file is copied from the
      source to the destination.

   struct copy_confirm_auth_priv {
           /* equal to GSS_GetMIC() of cfap_shared_secret */
           opaque              ccap_shared_secret_mic<>;
           /* the NFSv4 user name that the user principal maps to */
           utf8str_mixed       ccap_username;
           /* equal to seq_num of rpc_gss_cred_vers_3_t */
           unsigned int        ccap_seq_num;
   };

3.4.1.2.1.
Establishing a Security Context

   When the user principal wants to COPY a file between two servers, if
   it has not established copy_from_auth and copy_to_auth privileges on
   the servers, it establishes them:

   o  The user principal generates a secret it will share with the two
      servers.  This shared secret will be placed in the
      cfap_shared_secret and ctap_shared_secret fields of the
      appropriate privilege data types, copy_from_auth_priv and
      copy_to_auth_priv.

   o  An instance of copy_from_auth_priv is filled in with the shared
      secret, the destination server, and the NFSv4 user id of the user
      principal.  It will be sent with an RPCSEC_GSS3_CREATE procedure,
      and so cfap_seq_num is set to the seq_num of the credential of
      the RPCSEC_GSS3_CREATE procedure.  Because cfap_shared_secret is
      a secret, after XDR encoding copy_from_auth_priv, GSS_Wrap()
      (with privacy) is invoked on copy_from_auth_priv.  The
      RPCSEC_GSS3_CREATE procedure's arguments are:

      struct {
              rpc_gss3_gss_binding    *compound_binding;
              rpc_gss3_chan_binding   *chan_binding_mic;
              rpc_gss3_assertion      assertions<>;
              rpc_gss3_extension      extensions<>;
      } rpc_gss3_create_args;

      The string "copy_from_auth" is placed in assertions[0].privs.
      The output of GSS_Wrap() is placed in extensions[0].data.  The
      field extensions[0].critical is set to TRUE.  The source server
      calls GSS_Unwrap() on the privilege, and verifies that the
      seq_num matches the credential.  It then verifies that the NFSv4
      user id being asserted matches the source server's mapping of the
      user principal.  If it does, the privilege is established on the
      source server as: <"copy_from_auth", user id, destination>.
      The successful reply to RPCSEC_GSS3_CREATE has:

      struct {
              opaque                  handle<>;
              rpc_gss3_chan_binding   *chan_binding_mic;
              rpc_gss3_assertion      granted_assertions<>;
              rpc_gss3_assertion      server_assertions<>;
              rpc_gss3_extension      extensions<>;
      } rpc_gss3_create_res;

      The field "handle" is the RPCSEC_GSSv3 handle that the client
      will use on COPY_NOTIFY requests involving the source and
      destination server.  granted_assertions[0].privs will be equal to
      "copy_from_auth".  The server will return a GSS_Wrap() of
      copy_to_auth_priv.

   o  An instance of copy_to_auth_priv is filled in with the shared
      secret, the source server, and the NFSv4 user id.  It will be
      sent with an RPCSEC_GSS3_CREATE procedure, and so ctap_seq_num is
      set to the seq_num of the credential of the RPCSEC_GSS3_CREATE
      procedure.  Because ctap_shared_secret is a secret, after XDR
      encoding copy_to_auth_priv, GSS_Wrap() is invoked on
      copy_to_auth_priv.  The RPCSEC_GSS3_CREATE procedure's arguments
      are:

      struct {
              rpc_gss3_gss_binding    *compound_binding;
              rpc_gss3_chan_binding   *chan_binding_mic;
              rpc_gss3_assertion      assertions<>;
              rpc_gss3_extension      extensions<>;
      } rpc_gss3_create_args;

      The string "copy_to_auth" is placed in assertions[0].privs.  The
      output of GSS_Wrap() is placed in extensions[0].data.  The field
      extensions[0].critical is set to TRUE.  After unwrapping,
      verifying the seq_num, and the user principal to NFSv4 user ID
      mapping, the destination establishes a privilege of
      <"copy_to_auth", user id, source>.
      The successful reply to RPCSEC_GSS3_CREATE has:

      struct {
              opaque                  handle<>;
              rpc_gss3_chan_binding   *chan_binding_mic;
              rpc_gss3_assertion      granted_assertions<>;
              rpc_gss3_assertion      server_assertions<>;
              rpc_gss3_extension      extensions<>;
      } rpc_gss3_create_res;

      The field "handle" is the RPCSEC_GSSv3 handle that the client
      will use on COPY requests involving the source and destination
      server.  The field granted_assertions[0].privs will be equal to
      "copy_to_auth".  The server will return a GSS_Wrap() of
      copy_to_auth_priv.

3.4.1.2.2.  Starting a Secure Inter-Server Copy

   When the client sends a COPY_NOTIFY request to the source server, it
   uses the privileged "copy_from_auth" RPCSEC_GSSv3 handle.
   cna_destination_server in COPY_NOTIFY MUST be the same as the name
   of the destination server specified in copy_from_auth_priv.
   Otherwise, COPY_NOTIFY will fail with NFS4ERR_ACCESS.  The source
   server verifies that the privilege <"copy_from_auth", user id,
   destination> exists, and annotates it with the source filehandle, if
   the user principal has read access to the source file, and if
   administrative policies give the user principal and the NFS client
   read access to the source file (i.e., if the ACCESS operation would
   grant read access).  Otherwise, COPY_NOTIFY will fail with
   NFS4ERR_ACCESS.

   When the client sends a COPY request to the destination server, it
   uses the privileged "copy_to_auth" RPCSEC_GSSv3 handle.
   ca_source_server in COPY MUST be the same as the name of the source
   server specified in copy_to_auth_priv.  Otherwise, COPY will fail
   with NFS4ERR_ACCESS.  The destination server verifies that the
   privilege <"copy_to_auth", user id, source> exists, and annotates it
   with the source and destination filehandles.
   If the client has failed to establish the "copy_to_auth" privilege,
   the destination server will reject the request with
   NFS4ERR_PARTNER_NO_AUTH.

   If the client sends an OFFLOAD_REVOKE to the source server to
   rescind the destination server's copy privilege, it uses the
   privileged "copy_from_auth" RPCSEC_GSSv3 handle, and the
   cra_destination_server in OFFLOAD_REVOKE MUST be the same as the
   name of the destination server specified in copy_from_auth_priv.
   The source server will then delete the <"copy_from_auth", user id,
   destination> privilege and fail any subsequent copy requests sent
   under the auspices of this privilege from the destination server.

3.4.1.2.3.  Securing ONC RPC Server-to-Server Copy Protocols

   After a destination server has a "copy_to_auth" privilege
   established on it, and it receives a COPY request, if it knows it
   will use an ONC RPC protocol to copy data, it will establish a
   "copy_confirm_auth" privilege on the source server, using
   nfs@<destination> as the initiator principal and nfs@<source> as the
   target principal.

   The value of the field ccap_shared_secret_mic is a GSS_GetMIC() of
   the shared secret passed in the copy_to_auth privilege.  The field
   ccap_username is the mapping of the user principal to an NFSv4 user
   name ("user"@"domain" form), and MUST be the same as ctap_username
   and cfap_username.  The field ccap_seq_num is the seq_num of the
   RPCSEC_GSSv3 credential used for the RPCSEC_GSS3_CREATE procedure
   the destination will send to the source server to establish the
   privilege.

   The source server verifies the privilege, and establishes a
   <"copy_confirm_auth", user id, destination> privilege.  If the
   source server fails to verify the privilege, the COPY operation will
   be rejected with NFS4ERR_PARTNER_NO_AUTH.
   All subsequent ONC RPC requests sent from the destination to copy
   data from the source to the destination will use the RPCSEC_GSSv3
   handle returned by the source's RPCSEC_GSS3_CREATE response.

   Note that the use of the "copy_confirm_auth" privilege accomplishes
   the following:

   o  if a protocol like NFS is being used with export policies, the
      export policies can be overridden in case the destination server,
      acting as an NFS client, would not otherwise be authorized

   o  manual configuration to allow a copy relationship between the
      source and destination is not needed.

   If the attempt to establish a "copy_confirm_auth" privilege fails,
   then when the user principal sends a COPY request to the
   destination, the destination server will reject it with
   NFS4ERR_PARTNER_NO_AUTH.

3.4.1.2.4.  Securing Non-ONC RPC Server-to-Server Copy Protocols

   If the destination will not be using ONC RPC to copy the data, then
   the source and destination are using an unspecified copy protocol.
   The destination could use the shared secret and the NFSv4 user id to
   prove to the source server that the user principal has authorized
   the copy.

   For protocols that authenticate user names with passwords (e.g.,
   HTTP [12] and FTP [13]), the NFSv4 user id could be used as the user
   name, and an ASCII hexadecimal representation of the RPCSEC_GSSv3
   shared secret could be used as the user password or as input into
   non-password authentication methods like CHAP [14].

3.4.1.3.  Inter-Server Copy via ONC RPC but without RPCSEC_GSSv3

   ONC RPC security flavors other than RPCSEC_GSSv3 MAY be used with
   the server-side copy offload operations described in this chapter.
   In particular, host-based ONC RPC security flavors such as AUTH_NONE
   and AUTH_SYS MAY be used.  If a host-based security flavor is used,
   a minimal level of protection for the server-to-server copy protocol
   is possible.
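   As an illustration of the password derivation suggested in
   Section 3.4.1.2.4 above, the following sketch (not normative; the
   user id and function name are made-up examples) shows a user
   principal generating a 128-bit shared secret and rendering it as an
   ASCII hexadecimal password:

```python
# Illustrative sketch, not normative: the NFSv4 user id serves as the
# user name, and an ASCII hexadecimal rendering of the RPCSEC_GSSv3
# shared secret serves as the password (Section 3.4.1.2.4).
import secrets

def make_copy_credentials(nfs_user_id: str) -> tuple[str, str]:
    # The user principal generates the shared secret that would be
    # carried in cfap_shared_secret / ctap_shared_secret.
    shared_secret = secrets.token_bytes(16)   # 128-bit random secret
    password = shared_secret.hex()            # ASCII hexadecimal form
    return nfs_user_id, password

user, password = make_copy_credentials("jane@example.com")
# 16 secret bytes yield a 32-character hexadecimal password
```

   The same hexadecimal string could instead feed a non-password scheme
   such as CHAP [14] rather than travel as a cleartext password.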
   In the absence of strong security mechanisms such as RPCSEC_GSSv3,
   the challenge is how the source server and destination server
   identify themselves to each other, especially in the presence of
   multi-homed source and destination servers.  In a multi-homed
   environment, the destination server might not contact the source
   server from the same network address specified by the client in the
   COPY_NOTIFY.  This can be overcome using the procedure described
   below.

   When the client sends the source server the COPY_NOTIFY operation,
   the source server may reply to the client with a list of target
   addresses, names, and/or URLs and assign them to the unique
   quadruple: <random number, source fh, user ID, destination address>.
   If the destination uses one of these target netlocs to contact the
   source server, the source server will be able to uniquely identify
   the destination server, even if the destination server does not
   connect from the address specified by the client in COPY_NOTIFY.
   The level of assurance in this identification depends on the
   unpredictability, strength, and secrecy of the random number.

   For example, suppose the network topology is as shown in Figure 3.
   If the source filehandle is 0x12345, the source server may respond
   to a COPY_NOTIFY for destination 203.0.113.56 with the URLs:

     nfs://203.0.113.18//_COPY/FvhH1OKbu8VrxvV1erdjvR7N/203.0.113.56/
     _FH/0x12345

     nfs://192.0.2.18//_COPY/FvhH1OKbu8VrxvV1erdjvR7N/203.0.113.56/_FH/
     0x12345

   The name component after _COPY is 24 characters of base 64, more
   than enough to encode a 128 bit random number.

   The client will then send these URLs to the destination server in
   the COPY operation.  Suppose that the 192.0.2.0/24 network is a
   high-speed network and the destination server decides to transfer
   the file over this network.
   If the destination contacts the source server from 192.0.2.56 over
   this network using NFSv4.1, it does the following:

   COMPOUND { PUTROOTFH, LOOKUP "_COPY" ; LOOKUP
   "FvhH1OKbu8VrxvV1erdjvR7N" ; LOOKUP "203.0.113.56"; LOOKUP "_FH" ;
   OPEN "0x12345" ; GETFH }

   Provided that the random number is unpredictable and has been kept
   secret by the parties involved, the source server will therefore
   know that these NFSv4.x operations are being issued by the
   destination server identified in the COPY_NOTIFY.  This random
   number technique only provides initial authentication of the
   destination server, and cannot defend against man-in-the-middle
   attacks after authentication or an eavesdropper that observes the
   random number on the wire.  Other secure communication techniques
   (e.g., IPsec) are necessary to block these attacks.

3.4.1.4.  Inter-Server Copy without ONC RPC and RPCSEC_GSSv3

   The same techniques as described in Section 3.4.1.3, using unique
   URLs for each destination server, can be used for other protocols
   (e.g., HTTP [12] and FTP [13]) as well.

4.  Support for Application IO Hints

   Applications can issue client I/O hints via posix_fadvise() [5] to
   the NFS client.  While this can help the NFS client optimize I/O and
   caching for a file, it does not allow the NFS server and its
   exported file system to do likewise.  We add an IO_ADVISE operation
   (Section 14.8) to communicate the client file access patterns to the
   NFS server.  The NFS server, upon receiving an IO_ADVISE operation,
   MAY choose to alter its I/O and caching behavior, but is under no
   obligation to do so.

   Application-specific NFS clients such as those used by hypervisors
   and databases can also leverage application hints to communicate
   their specialized requirements.

5.  Sparse Files

5.1.
Introduction

   A sparse file is a common way of representing a large file without
   having to utilize all of the disk space for it.  Consequently, a
   sparse file uses less physical space than its size indicates.  This
   means the file contains 'holes', byte ranges within the file that
   contain no data.  Most modern file systems support sparse files,
   including most UNIX file systems and NTFS, but notably not Apple's
   HFS+.  Common examples of sparse files include Virtual Machine (VM)
   OS/disk images, database files, log files, and even checkpoint
   recovery files most commonly used by the HPC community.

   If an application reads a hole in a sparse file, the file system
   must return all zeros to the application.  For local data access
   there is little penalty, but with NFS these zeros must be
   transferred back to the client.  If an application uses the NFS
   client to read data into memory, this wastes time and bandwidth as
   the application waits for the zeros to be transferred.

   A sparse file is typically created by initializing the file to be
   all zeros; nothing is written to the data in the file, instead the
   hole is recorded in the metadata for the file.  So an 8G disk image
   might be represented initially by a couple hundred bits in the inode
   and nothing on the disk.  If the VM then writes 100M to a file in
   the middle of the image, there would now be two holes represented in
   the metadata and 100M in the data.

   Two new operations WRITE_PLUS (Section 14.7) and READ_PLUS
   (Section 14.10) are introduced.  WRITE_PLUS allows for the creation
   of a sparse file and for hole punching; for example, an application
   might want to zero out a range of the file.  READ_PLUS supports all
   the features of READ but includes an extension to support sparse
   pattern files (Section 7.1.2).
   READ_PLUS is guaranteed to perform no worse than READ, and can
   dramatically improve performance with sparse files.  READ_PLUS does
   not depend on pNFS protocol features, but can be used by pNFS to
   support sparse files.

5.2.  Terminology

   Regular file:  An object of file type NF4REG or NF4NAMEDATTR.

   Sparse file:  A Regular file that contains one or more Holes.

   Hole:  A byte range within a Sparse file that contains regions of
      all zeros.  For block-based file systems, this could also be an
      unallocated region of the file.

   Hole Threshold:  The minimum length of a Hole as determined by the
      server.  If a server chooses to define a Hole Threshold, then it
      would not return hole information about holes with a length
      shorter than the Hole Threshold.

5.3.  New Operations

   READ_PLUS and WRITE_PLUS are new variants of the NFSv4.1 READ and
   WRITE operations [1].  Besides being able to support all of the data
   semantics of those operations, they can also be used by the client
   and server to efficiently transfer both holes and ADHs (see
   Section 7.1.1).  As both READ and WRITE are inefficient for transfer
   of sparse sections of the file, they are marked as OBSOLESCENT in
   NFSv4.2.  Instead, a client should utilize READ_PLUS and WRITE_PLUS.
   Note that as the client has no a priori knowledge of whether either
   an ADH or a hole is present or not, if it supports these operations
   and so does the server, then it should always use these operations.

5.3.1.  READ_PLUS

   For holes, READ_PLUS extends the response to avoid returning data
   for portions of the file which are initialized and contain no
   backing store.  Additionally, it will do so if the result would
   appear to be a hole; i.e., if the result would be a data block
   composed entirely of zeros, then it is easier to return a hole.
   Returning data blocks of uninitialized data wastes computational and
   network resources, thus reducing performance.  For ADHs, READ_PLUS
   is used to return the metadata describing the portions of the file
   which are initialized and contain no backing store.

   If the client sends a READ operation, it is explicitly stating that
   it is neither supporting sparse files nor ADHs.  So if a READ occurs
   on a sparse ADH or file, then the server must expand such data to be
   raw bytes.  If a READ occurs in the middle of a hole or ADH, the
   server can only send back bytes starting from that offset.  In
   contrast, if a READ_PLUS occurs in the middle of a hole or ADH, the
   server can send back a range which starts before the offset and
   extends past the range.

5.3.2.  WRITE_PLUS

   WRITE_PLUS can be used to either hole punch or initialize ADHs.  For
   either purpose, the client can avoid the transfer of a repetitive
   pattern across the network.  If the file system on the server does
   not support sparse files, the WRITE_PLUS operation may return the
   result asynchronously via the CB_OFFLOAD operation.  As a hole punch
   may entail deallocating data blocks, even if the file system
   supports sparse files, it may still have to return the result via
   CB_OFFLOAD.

6.  Space Reservation

6.1.  Introduction

   Applications such as hypervisors want to be able to reserve space
   for a file, report the amount of actual disk space a file occupies,
   and free up the backing space of a file when it is not required.  In
   virtualized environments, virtual disk files are often stored on NFS
   mounted volumes.  Since virtual disk files represent the hard disks
   of virtual machines, hypervisors often have to guarantee certain
   properties for the file.

   One such example is space reservation.
When a hypervisor creates a 1387 virtual disk file, it often tries to preallocate the space for the 1388 file so that there are no future allocation-related errors during the 1389 operation of the virtual machine. Such errors prevent a virtual 1390 machine from continuing execution and result in downtime. 1392 Currently, in order to achieve such a guarantee, applications zero 1393 the entire file. The initial zeroing allocates the backing blocks 1394 and all subsequent writes are overwrites of already allocated blocks. 1395 This approach is not only inefficient in terms of the amount of I/O 1396 done, it is also not guaranteed to work on file systems that are log 1397 structured or deduplicated. An efficient way of guaranteeing space 1398 reservation would be beneficial to such applications. 1400 We define a "reservation" as being the combination of the 1401 space_reserved attribute (see Section 12.2.4) and the size attribute 1402 (see Section 5.8.1.5 of [1]). If the space_reserved attribute is set on 1403 a file, it is guaranteed that writes that do not grow the file past 1404 the size will not fail with NFS4ERR_NOSPC. Once the size is changed, 1405 then the reservation is changed to that new size. 1407 Another useful feature is the ability to report the number of blocks 1408 that would be freed when a file is deleted. Currently, NFS reports 1409 two size attributes: 1411 size The logical file size of the file. 1413 space_used The size in bytes that the file occupies on disk. 1415 While these attributes are sufficient for space accounting in 1416 traditional file systems, they prove to be inadequate in modern file 1417 systems that support block sharing. In such file systems, multiple 1418 inodes can point to a single block with a block reference count to 1419 guard against premature freeing.
Having a way to tell the number of 1420 blocks that would be freed if the file was deleted would be useful to 1421 applications that wish to migrate files when a volume is low on 1422 space. 1424 Since virtual disks represent a hard drive in a virtual machine, a 1425 virtual disk can be viewed as a file system within a file. Since not 1426 all blocks within a file system are in use, there is an opportunity 1427 to reclaim blocks that are no longer in use. A call to deallocate 1428 blocks could result in better space efficiency. Less space MAY be 1429 consumed for backups after block deallocation. 1431 The following operations and attributes can be used to resolve these 1432 issues: 1434 space_reserved This attribute specifies that writes to the reserved 1435 area of the file will not fail with NFS4ERR_NOSPC. 1437 space_freed This attribute specifies the space freed when a file is 1438 deleted, taking block sharing into consideration. 1440 WRITE_PLUS This operation zeroes and/or deallocates the blocks 1441 backing a region of the file. 1443 If space_used of a file is interpreted to mean the size in bytes of 1444 all disk blocks pointed to by the inode of the file, then shared 1445 blocks get double counted, over-reporting the space utilization. 1446 This also has the adverse effect that the deletion of a file with 1447 shared blocks frees up less than space_used bytes. 1449 On the other hand, if space_used is interpreted to mean the size in 1450 bytes of those disk blocks unique to the inode of the file, then 1451 shared blocks are not counted in any file, resulting in under- 1452 reporting of the space utilization. 1454 For example, two files A and B have 10 blocks each. Let 6 of these 1455 blocks be shared between them. Thus, the combined space utilized by 1456 the two files is 14 * BLOCK_SIZE bytes. In the former case, the 1457 combined space utilization of the two files would be reported as 20 * 1458 BLOCK_SIZE.
However, deleting either would only result in 4 * 1459 BLOCK_SIZE being freed. Conversely, the latter interpretation would 1460 report that the space utilization is only 8 * BLOCK_SIZE. 1462 Adding another size attribute, space_freed (see Section 12.2.5), is 1463 helpful in solving this problem. space_freed is the number of blocks 1464 that are allocated to the given file that would be freed on its 1465 deletion. In the example, both A and B would report space_freed as 4 1466 * BLOCK_SIZE and space_used as 10 * BLOCK_SIZE. If A is deleted, B 1467 will report space_freed as 10 * BLOCK_SIZE as the deletion of B would 1468 result in the deallocation of all 10 blocks. 1470 The addition of this attribute does not solve the problem of space 1471 being over-reported. However, over-reporting is better than under- 1472 reporting. 1474 7. Application Data Hole Support 1476 At the OS level, files are stored in disk blocks. Applications 1477 are also free to impose structure on the data contained in a file, and 1478 we can define an Application Data Block (ADB) to be such a structure. 1479 From the application's viewpoint, it only wants to handle ADBs and 1480 not raw bytes (see [15]). An ADB is typically composed of two 1481 sections: a header and data. The header describes the 1482 characteristics of the block and can provide a means to detect 1483 corruption in the data payload. The data section is typically 1484 initialized to all zeros. 1486 The format of the header is application specific, but there are two 1487 main components typically encountered: 1489 1. A logical block number which allows the application to determine 1490 which data block is being referenced. This is useful when the 1491 client is not storing the blocks in contiguous memory. 1493 2. Fields to describe the state of the ADB and a means to detect 1494 block corruption.
For both pieces of data, a useful property is 1495 that their allowed values be unique, such that if they are passed across the 1496 network, corruption due to translation between big- and little- 1497 endian architectures is detectable. For example, 0xF0DEDEF0 has 1498 the same bit pattern in both architectures. 1500 Applications already impose structures on files [15] and detect 1501 corruption in data blocks [16]. What they are not able to do is 1502 efficiently transfer and store ADBs. To initialize a file with ADBs, 1503 the client must send the full ADB to the server and that must be 1504 stored on the server. 1506 In this section, we are going to define an Application Data Hole 1507 (ADH), which is a generic framework for transferring the ADB, present 1508 one approach to detecting corruption in a given ADH implementation, 1509 and describe the model for how the client and server can support 1510 efficient initialization of ADHs, reading of ADH holes, punching ADH 1511 holes in a file, and space reservation. We define the ADHN to be the 1512 Application Data Hole Number, which is the logical block number 1513 discussed earlier. 1515 7.1. Generic Framework 1517 We want the representation of the ADH to be flexible enough to 1518 support many different applications. The most basic approach is no 1519 imposition of a block at all, which means we are working with the raw 1520 bytes. Such an approach would be useful for storing holes, punching 1521 holes, etc. In more complex deployments, a server might be 1522 supporting multiple applications, each with its own definition of 1523 the ADH. One might store the ADHN at the start of the block and then 1524 have a guard pattern to detect corruption [17]. The next might store 1525 the ADHN at an offset of 100 bytes within the block and have no guard 1526 pattern at all, i.e., existing applications might already have well 1527 defined formats for their data blocks.
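The layout flexibility just described can be illustrated with a short sketch. This is not part of the protocol; the function name, field choices, and 4k block size are hypothetical, chosen to mirror the two applications above (one storing the ADHN at the start of the block followed by a guard pattern, the other storing the ADHN at offset 100 with no guard pattern at all):

```python
import struct

def make_adb_block(block_size, blocknum, blocknum_off, pattern, pattern_off):
    """Materialize one application data block: a zero-filled buffer with an
    8-byte big-endian block number (the ADHN) and an opaque guard pattern
    placed at application-chosen offsets. Hypothetical layout, for
    illustration only."""
    buf = bytearray(block_size)
    buf[blocknum_off:blocknum_off + 8] = struct.pack(">Q", blocknum)
    buf[pattern_off:pattern_off + len(pattern)] = pattern
    return bytes(buf)

# First application: ADHN at offset 0, guard pattern right after it.
blk_a = make_adb_block(4096, 7, 0, b"\xf0\xde\xde\xf0", 8)

# Second application: ADHN at offset 100, no guard pattern at all.
blk_b = make_adb_block(4096, 7, 100, b"", 0)
```

Note that the same expansion routine serves both layouts; only the offsets differ, which is the point of carrying adh_reloff_blocknum and adh_reloff_pattern in the protocol rather than fixing a layout.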
1529 The guard pattern can be used to represent the state of the block, to 1530 protect against corruption, or both. Again, it needs to be able to 1531 be placed anywhere within the ADH. 1533 We need to be able to represent the starting offset of the block and 1534 the size of the block. Note that nothing prevents the application 1535 from defining different sized blocks in a file. 1537 7.1.1. Data Hole Representation 1539 struct app_data_hole4 { 1540 offset4 adh_offset; 1541 length4 adh_block_size; 1542 length4 adh_block_count; 1543 length4 adh_reloff_blocknum; 1544 count4 adh_block_num; 1545 length4 adh_reloff_pattern; 1546 opaque adh_pattern<>; 1547 }; 1549 The app_data_hole4 structure captures the abstraction presented for 1550 the ADH. The additional fields present are to allow the transmission 1551 of adh_block_count ADHs at one time. We also use adh_block_num to 1552 convey the ADHN of the first block in the sequence. Each ADH will 1553 contain the same adh_pattern string. 1555 As both adh_block_num and adh_pattern are optional, if either 1556 adh_reloff_pattern or adh_reloff_blocknum is set to NFS4_UINT64_MAX, 1557 then the corresponding field is not set in any of the ADH. 1559 7.1.2. Data Content 1561 /* 1562 * Use an enum such that we can extend new types. 1563 */ 1564 enum data_content4 { 1565 NFS4_CONTENT_DATA = 0, 1566 NFS4_CONTENT_APP_DATA_HOLE = 1, 1567 NFS4_CONTENT_HOLE = 2 1568 }; 1570 New operations might need to differentiate between wanting to access 1571 data versus an ADH. Also, future minor versions might want to 1572 introduce new data formats. This enumeration allows that to occur. 1574 7.2. An Example of Detecting Corruption 1576 In this section, we define an ADH format in which corruption can be 1577 detected. Note that this is just one possible format and means to 1578 detect corruption. 1580 Consider a very basic implementation of an operating system's disk 1581 blocks. 
A block is either data or it is an indirect block which 1582 allows for files to be larger than one block. It is desired to be 1583 able to initialize a block. Lastly, to quickly unlink a file, a 1584 block can be marked invalid. The contents remain intact - which 1585 would enable this OS application to undelete a file. 1587 The application defines 4k sized data blocks, with an 8 byte block 1588 counter occurring at offset 0 in the block, and with the guard 1589 pattern occurring at offset 8 inside the block. Furthermore, the 1590 guard pattern can take one of four states: 1592 0xfeedface - This is the FREE state and indicates that the ADH 1593 format has been applied. 1595 0xcafedead - This is the DATA state and indicates that real data 1596 has been written to this block. 1598 0xe4e5c001 - This is the INDIRECT state and indicates that the 1599 block contains block counter numbers that are chained off of this 1600 block. 1602 0xba1ed4a3 - This is the INVALID state and indicates that the block 1603 contains data whose contents are garbage. 1605 Finally, it also defines an 8 byte checksum [18] starting at byte 16 1606 which applies to the remaining contents of the block. If the state 1607 is FREE, then that checksum is trivially zero. As such, the 1608 application has no need to transfer the checksum implicitly inside 1609 the ADH - it need not make the transfer layer aware of the fact that 1610 there is a checksum (see [16] for an example of checksums used to 1611 detect corruption in application data blocks). 1613 Corruption in each ADH can thus be detected: 1615 o If the guard pattern is anything other than one of the allowed 1616 values, including all zeros. 1618 o If the guard pattern is FREE and any other byte in the remainder 1619 of the ADH is anything other than zero. 1621 o If the guard pattern is anything other than FREE, then if the 1622 stored checksum does not match the computed checksum. 
1624 o If the guard pattern is INDIRECT and one of the stored indirect 1625 block numbers has a value greater than the number of ADHs in the 1626 file. 1628 o If the guard pattern is INDIRECT and one of the stored indirect 1629 block numbers is a duplicate of another stored indirect block 1630 number. 1632 As can be seen, the application can detect errors based on the 1633 combination of the guard pattern state and the checksum. In addition, 1634 the application can detect corruption based on the state and the 1635 contents of the ADH. This last point is important in validating the 1636 minimum amount of data we incorporated into our generic framework. 1637 I.e., the guard pattern is sufficient in allowing applications to 1638 design their own corruption detection. 1640 Finally, it is important to note that none of these corruption checks 1641 occur in the transport layer. The server and client components are 1642 totally unaware of the file format and might report everything as 1643 being transferred correctly even in cases where the application detects 1644 corruption. 1646 7.3. Example of READ_PLUS 1648 The hypothetical application presented in Section 7.2 can be used to 1649 illustrate how READ_PLUS would return an array of results. A file is 1650 created and initialized with 100 4k ADHs in the FREE state: 1652 WRITE_PLUS {0, 4k, 100, 0, 0, 8, 0xfeedface} 1654 Further, assume the application writes a single ADH at 16k, changing 1655 the guard pattern to 0xcafedead; we would then have in memory: 1657 0 -> (16k - 1) : 4k, 4, 0, 0, 8, 0xfeedface 1658 16k -> (20k - 1) : 00 00 00 04 ca fe de ad XX XX ... XX XX 1659 20k -> 400k : 4k, 95, 0, 5, 8, 0xfeedface 1661 And when the client did a READ_PLUS of 64k at the start of the file, 1662 it would get back a result of an ADH, some data, and a final ADH: 1664 ADH {0, 4k, 4, 0, 0, 8, 0xfeedface} 1665 data 4k 1666 ADH {20k, 4k, 11, 0, 5, 8, 0xfeedface} 1668 8. Labeled NFS 1670 8.1.
Introduction 1672 Access control models such as Unix permissions or Access Control 1673 Lists are commonly referred to as Discretionary Access Control (DAC) 1674 models. These systems base their access decisions on user identity 1675 and resource ownership. In contrast, Mandatory Access Control (MAC) 1676 models base their access control decisions on the label on the 1677 subject (usually a process) and the object it wishes to access [19]. 1678 These labels may contain user identity information but usually 1679 contain additional information. In DAC systems, users are free to 1680 specify the access rules for resources that they own. MAC models 1681 base their security decisions on a system wide policy established by 1682 an administrator or organization which the users do not have the 1683 ability to override. In this section, we add a MAC model to NFSv4.2. 1685 The first change necessary is to devise a method for transporting and 1686 storing security label data on NFSv4 file objects. Security labels 1687 have several semantics that are met by NFSv4 recommended attributes 1688 such as the ability to set the label value upon object creation. 1689 Access control on these attributes is done through a combination of 1690 two mechanisms. As with other recommended attributes on file objects, 1691 the usual DAC checks (ACLs and permission bits) will be performed to 1692 ensure that proper file ownership is enforced. In addition, a MAC 1693 system MAY be employed on the client, server, or both to enforce 1694 additional policy on what subjects may modify security label 1695 information. 1697 The second change is to provide methods for the client to determine 1698 if the security label has changed. A client which needs to know if a 1699 label is going to change SHOULD request a delegation on that file. 1700 In order to change the security label, the server will have to recall 1701 all delegations. This will inform the client of the change.
If a 1702 client wants to detect if the label has changed, it MAY use VERIFY 1703 and NVERIFY on FATTR4_CHANGE_SEC_LABEL to detect that the 1704 FATTR4_SEC_LABEL has been modified. 1706 The final change necessary is a modification to the RPC layer used in 1707 NFSv4 in the form of a new version of the RPCSEC_GSS [6] framework. 1708 In order for an NFSv4 server to apply MAC checks, it must obtain 1709 additional information from the client. Several methods were 1710 explored for performing this, and it was decided that the best 1711 approach was to incorporate the ability to make security attribute 1712 assertions through the RPC mechanism. RPCSECGSSv3 [4] outlines a 1713 method to assert additional security information such as security 1714 labels on GSS context creation and have that data bound to all RPC 1715 requests that make use of that context. 1717 8.2. Definitions 1719 Label Format Specifier (LFS): is an identifier used by the client to 1720 establish the syntactic format of the security label and the 1721 semantic meaning of its components. These specifiers exist in a 1722 registry associated with documents describing the format and 1723 semantics of the label. 1725 Label Format Registry: is the IANA registry containing all 1726 registered LFS along with references to the documents that 1727 describe the syntactic format and semantics of the security label. 1729 Policy Identifier (PI): is an optional part of the definition of a 1730 Label Format Specifier which allows for clients and servers to 1731 identify specific security policies. 1733 Object: is a passive resource within the system that we wish to be 1734 protected. Objects can be entities such as files, directories, 1735 pipes, sockets, and many other system resources relevant to the 1736 protection of the system state. 1738 Subject: is an active entity, usually a process, which is requesting 1739 access to an object. 1741 MAC-Aware: is a server which can transmit and store object labels.
1743 MAC-Functional: is a client or server which is Labeled NFS enabled. 1744 Such a system can interpret labels and apply policies based on the 1745 security system. 1747 Multi-Level Security (MLS): is a traditional model where objects are 1748 given a sensitivity level (Unclassified, Secret, Top Secret, etc.) 1749 and a category set [20]. 1751 8.3. MAC Security Attribute 1753 MAC models base access decisions on security attributes bound to 1754 subjects and objects. This information can range from a user 1755 identity for an identity-based MAC model, to sensitivity levels for 1756 Multi-Level Security, to a type for Type Enforcement. These models 1757 base their decisions on different criteria but the semantics of the 1758 security attribute remain the same. The semantics required by the 1759 security attributes are listed below: 1761 o MUST provide flexibility with respect to the MAC model. 1763 o MUST provide the ability to atomically set security information 1764 upon object creation. 1766 o MUST provide the ability to enforce access control decisions both 1767 on the client and the server. 1769 o MUST NOT expose an object to either the client or server name 1770 space before its security information has been bound to it. 1772 NFSv4 implements the security attribute as a recommended attribute. 1773 These attributes have a fixed format and semantics, which conflicts 1774 with the flexible nature of the security attribute. To resolve this, 1775 the security attribute consists of two components. The first 1776 component is an LFS as defined in [21] to allow for interoperability 1777 between MAC mechanisms. The second component is an opaque field 1778 which is the actual security attribute data. To allow for various 1779 MAC models, NFSv4 should be used solely as a transport mechanism for 1780 the security attribute. It is the responsibility of the endpoints to 1781 consume the security attribute and make access decisions based on 1782 their respective models.
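The two-component attribute just described can be sketched as a small data type. This is illustrative only: the names SecLabel, lfs, and data are hypothetical stand-ins for the FATTR4_SEC_LABEL wire format of Section 12.2.2, and the toy interpreter registry merely demonstrates that the endpoints, not NFS itself, parse the opaque portion:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SecLabel:
    """A security attribute as described above: an LFS identifier plus an
    opaque blob that only the endpoints interpret (names illustrative)."""
    lfs: int     # Label Format Specifier from the Label Format Registry
    data: bytes  # opaque, MAC-model-specific label data

# Endpoint-side interpreters keyed by LFS; the transport never parses `data`.
INTERPRETERS = {}

def register_lfs(lfs, parse):
    INTERPRETERS[lfs] = parse

def interpret(label):
    """Dispatch the opaque data to the MAC model registered for its LFS."""
    try:
        return INTERPRETERS[label.lfs](label.data)
    except KeyError:
        raise ValueError("no interpreter registered for LFS %d" % label.lfs)

# Example: a toy LFS whose opaque data is just a UTF-8 level name.
register_lfs(0, lambda data: data.decode("utf-8"))
```

A client and server that share a registered LFS can each call their own interpreter; an endpoint facing an unknown LFS must fall back to translation or rejection, as Section 8.6.1.1 discusses.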
In addition, creation of objects through 1783 OPEN and CREATE allows for the security attribute to be specified 1784 upon creation. By providing an atomic create and set operation for 1785 the security attribute it is possible to enforce the second and 1786 fourth requirements. The recommended attribute FATTR4_SEC_LABEL (see 1787 Section 12.2.2) will be used to satisfy this requirement. 1789 8.3.1. Delegations 1791 In the event that a security attribute is changed on the server while 1792 a client holds a delegation on the file, both the server and the 1793 client MUST follow the NFSv4.1 protocol (see Chapter 10 of [1]) with 1794 respect to attribute changes. The client SHOULD flush all changes back to 1795 the server and relinquish the delegation. 1797 8.3.2. Permission Checking 1799 It is not feasible to enumerate all possible MAC models and even 1800 levels of protection within a subset of these models. This means 1801 that NFSv4 clients and servers cannot be expected to directly make 1802 access control decisions based on the security attribute. Instead, 1803 NFSv4 should defer permission checking on this attribute to the host 1804 system. These checks are performed in addition to existing DAC and 1805 ACL checks outlined in the NFSv4 protocol. Section 8.6 gives a 1806 specific example of how the security attribute is handled under a 1807 particular MAC model. 1809 8.3.3. Object Creation 1811 When creating files in NFSv4, the OPEN and CREATE operations are used. 1812 One of the parameters to these operations is an fattr4 structure 1813 containing the attributes the file is to be created with. This 1814 allows NFSv4 to atomically set the security attribute of files upon 1815 creation. When a client is MAC-Functional, it must always provide the 1816 initial security attribute upon file creation.
In the event that the 1817 server is MAC-Functional as well, it should determine by policy 1818 whether it will accept the attribute from the client or instead make 1819 the determination itself. If the client is not MAC-Functional, then 1820 the MAC-Functional server must decide on a default label. A more in- 1821 depth explanation can be found in Section 8.6. 1823 8.3.4. Existing Objects 1825 Note that under the MAC model, all objects must have labels. 1826 Therefore, if an existing server is upgraded to include Labeled NFS 1827 support, then it is the responsibility of the security system to 1828 define the behavior for existing objects. 1830 8.3.5. Label Changes 1832 If there are open delegations on the file belonging to clients other 1833 than the one making the label change, then the process described in 1834 Section 8.3.1 must be followed. In short, the delegation will be 1835 recalled, which effectively notifies the client of the change. 1837 As the server is always presented with the subject label from the 1838 client, it does not necessarily need to communicate the fact that the 1839 label has changed to the client. In the cases where the change 1840 outright denies the client access, the client will be able to quickly 1841 determine that there is a new label in effect. 1843 Consider a system in which the clients enforce MAC checks and the 1844 server has a very simple security system which just stores the 1845 labels. In this system, the MAC label check always allows access, 1846 regardless of the subject label. 1848 In such a system, MAC labels are enforced entirely by the client. The 1849 security policies on the client can be such that the client does not 1850 have access to the file unless it has a delegation. The recall of 1851 the delegation will force the client to flush any cached content of 1852 the file.
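A minimal sketch of the client-side check in this simple-server scenario might look as follows. The MLS-style levels, category sets, and function names are invented for illustration; an actual client would evaluate whatever policy its MAC model defines:

```python
# Sensitivity ordering for a toy MLS-style policy (illustrative only).
LEVELS = {"Unclassified": 0, "Secret": 1, "Top Secret": 2}

def dominates(subject, obj):
    """True if the subject's (level, categories) dominates the object's:
    the level is at least as high and the category set is a superset."""
    s_level, s_cats = subject
    o_level, o_cats = obj
    return LEVELS[s_level] >= LEVELS[o_level] and s_cats >= o_cats

def client_may_read(subject_label, object_label):
    # In the simple-server model above, this check happens entirely on the
    # client; the server's own MAC check always allows access.
    return dominates(subject_label, object_label)

ok = client_may_read(("Secret", {"crypto"}), ("Unclassified", set()))
denied = client_may_read(("Unclassified", set()), ("Secret", set()))
```

The server's only contribution here is storing and returning the object label; the dominance decision, and therefore the protection, lives wholly on the client.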
The clients could also be configured to periodically 1853 VERIFY/NVERIFY the FATTR4_CHANGE_SEC_LABEL attribute to determine 1854 when the label has changed. When a change is detected, then the 1855 client could take the costlier action of retrieving the 1856 FATTR4_SEC_LABEL. 1858 8.4. pNFS Considerations 1860 This section examines the issues in deploying Labeled NFS in a pNFS 1861 community of servers. 1863 8.4.1. MAC Label Checks 1865 The new FATTR4_SEC_LABEL attribute is metadata information and as 1866 such the DS is not aware of the value contained on the MDS. 1867 Fortunately, the NFSv4.1 protocol [1] already has provisions for 1868 doing access level checks from the DS to the MDS. In order for the 1869 DS to validate the subject label presented by the client, it SHOULD 1870 utilize this mechanism. 1872 8.5. Discovery of Server Labeled NFS Support 1874 The server can easily determine that a client supports Labeled NFS 1875 when it queries for the FATTR4_SEC_LABEL label for an object. Note 1876 that it cannot assume that the presence of RPCSEC_GSSv3 indicates 1877 Labeled NFS support. The client might need to discover which LFS the 1878 server supports. 1880 A server which supports Labeled NFS MUST allow a client with any 1881 subject label to retrieve the FATTR4_SEC_LABEL attribute for the root 1882 filehandle, ROOTFH. The following compound must always succeed as 1883 far as a MAC label check is concerned: 1885 PUTROOTFH, GETATTR {FATTR4_SEC_LABEL} 1887 Note that the server might have imposed a security flavor on the root 1888 that precludes such access. I.e., if the server requires kerberized 1889 access and the client presents a compound with AUTH_SYS, then the 1890 server is allowed to return NFS4ERR_WRONGSEC in this case. But if 1891 the client presents a correct security flavor, then the server MUST 1892 return the FATTR4_SEC_LABEL attribute with the supported LFS filled 1893 in. 1895 8.6. 
MAC Security NFS Modes of Operation 1897 A system using Labeled NFS may operate in two modes. The first mode 1898 provides the most protection and is called "full mode". In this mode 1899 both the client and server implement a MAC model allowing each end to 1900 make an access control decision. The remaining mode is called the 1901 "guest mode" and in this mode one end of the connection is not 1902 implementing a MAC model and thus offers less protection than full 1903 mode. 1905 8.6.1. Full Mode 1907 Full mode environments consist of MAC-Functional NFSv4 servers and 1908 clients and may be composed of mixed MAC models and policies. The 1909 system requires that both the client and server have an opportunity 1910 to perform an access control check based on all relevant information 1911 within the network. The file object security attribute is provided 1912 using the mechanism described in Section 8.3. The security attribute 1913 of the subject making the request is transported at the RPC layer 1914 using the mechanism described in RPCSECGSSv3 [4]. 1916 8.6.1.1. Initial Labeling and Translation 1918 The ability to create a file is an action that a MAC model may wish 1919 to mediate. The client is given the responsibility to determine the 1920 initial security attribute to be placed on a file. This allows the 1921 client to make a decision as to the acceptable security attributes to 1922 create a file with before sending the request to the server. Once 1923 the server receives the creation request from the client it may 1924 choose to evaluate if the security attribute is acceptable. 1926 Security attributes on the client and server may vary based on MAC 1927 model and policy. To handle this the security attribute field has an 1928 LFS component. This component is a mechanism for the host to 1929 identify the format and meaning of the opaque portion of the security 1930 attribute. A full mode environment may contain hosts operating in 1931 several different LFSs. 
In this case a mechanism for translating the 1932 opaque portion of the security attribute is needed. The actual 1933 translation function will vary based on MAC model and policy and is 1934 out of the scope of this document. If a translation is unavailable 1935 for a given LFS then the request MUST be denied. Another recourse is 1936 to allow the host to provide a fallback mapping for unknown security 1937 attributes. 1939 8.6.1.2. Policy Enforcement 1941 In full mode access control decisions are made by both the clients 1942 and servers. When a client makes a request it takes the security 1943 attribute from the requesting process and makes an access control 1944 decision based on that attribute and the security attribute of the 1945 object it is trying to access. If the client denies that access an 1946 RPC call to the server is never made. If however the access is 1947 allowed the client will make a call to the NFS server. 1949 When the server receives the request from the client it extracts the 1950 security attribute conveyed in the RPC request. The server then uses 1951 this security attribute and the attribute of the object the client is 1952 trying to access to make an access control decision. If the server's 1953 policy allows this access it will fulfill the client's request, 1954 otherwise it will return NFS4ERR_ACCESS. 1956 Implementations MAY validate security attributes supplied over the 1957 network to ensure that they are within a set of attributes permitted 1958 from a specific peer, and if not, reject them. Note that a system 1959 may permit a different set of attributes to be accepted from each 1960 peer. 1962 8.6.1.3. Limited Server 1964 A Limited Server mode (see Section 3.5.2 of [19]) consists of a 1965 server which is label aware, but does not enforce policies. 
Such a 1966 server will store and retrieve all object labels presented by 1967 clients, utilize the methods described in Section 8.3.5 to allow the 1968 clients to detect changing labels, but will not restrict access via 1969 the subject label. Instead, it will expect the clients to enforce 1970 all such access locally. 1972 8.6.2. Guest Mode 1974 Guest mode implies that either the client or the server does not 1975 handle labels. If the client is not Labeled NFS aware, then it will 1976 not offer subject labels to the server. The server is the only 1977 entity enforcing policy, and may selectively provide standard NFS 1978 services to clients based on their authentication credentials and/or 1979 associated network attributes (e.g., IP address, network interface). 1980 The level of trust and access extended to a client in this mode is 1981 configuration-specific. If the server is not Labeled NFS aware, then 1982 it will not return object labels to the client. Clients in this 1983 environment may consist of groups implementing different MAC 1984 model policies. The system requires that all clients in the 1985 environment be responsible for access control checks. 1987 8.7. Security Considerations 1989 This entire chapter deals with security issues. 1991 Depending on the level of protection the MAC system offers, there may 1992 be a requirement to tightly bind the security attribute to the data. 1994 When only one of the client or server enforces labels, it is 1995 important to realize that the other side is not enforcing MAC 1996 protections. Alternate methods might be in use to handle the lack of 1997 MAC support, and care should be taken to identify and mitigate threats 1998 from possible tampering outside of these methods. 2000 An example of this is that a server that modifies READDIR or LOOKUP 2001 results based on the client's subject label might want to always 2002 construct the same subject label for a client which does not present 2003 one.
This will prevent a non-Labeled NFS client from mixing entries 2004 in the directory cache. 2006 9. Sharing change attribute implementation details with NFSv4 clients 2008 9.1. Introduction 2010 Although both the NFSv4 [9] and NFSv4.1 [1] protocols define the 2011 change attribute as being mandatory to implement, there is little in 2012 the way of guidance. The only mandated feature is that the value 2013 must change whenever the file data or metadata change. 2015 While this allows for a wide range of implementations, it also leaves 2016 the client with a conundrum: how does it determine which is the most 2017 recent value for the change attribute in a case where several RPC 2018 calls have been issued in parallel? In other words, if two COMPOUNDs, 2019 both containing WRITE and GETATTR requests for the same file, have 2020 been issued in parallel, how does the client determine which of the 2021 two change attribute values returned in the replies to the GETATTR 2022 requests corresponds to the most recent state of the file? In some 2023 cases, the only recourse may be to send another COMPOUND containing a 2024 third GETATTR that is fully serialized with the first two. 2026 NFSv4.2 avoids this kind of inefficiency by allowing the server to 2027 share details about how the change attribute is expected to evolve, 2028 so that the client may immediately determine which, out of the 2029 several change attribute values returned by the server, is the most 2030 recent. change_attr_type is defined as a new recommended attribute 2031 (see Section 12.2.1), and is per file system. 2033 10. Security Considerations 2035 NFSv4.2 has all of the security concerns present in NFSv4.1 (see 2036 Section 21 of [1]) and those present in the Server-side Copy (see 2037 Section 3.4) and in Labeled NFS (see Section 8.7). 2039 11. Error Values 2041 NFS error numbers are assigned to failed operations within a Compound 2042 (COMPOUND or CB_COMPOUND) request.
A Compound request contains a 2043 number of NFS operations that have their results encoded in sequence 2044 in a Compound reply. The results of successful operations will 2045 consist of an NFS4_OK status followed by the encoded results of the 2046 operation. If an NFS operation fails, an error status will be 2047 entered in the reply and the Compound request will be terminated. 2049 11.1. Error Definitions 2051 Protocol Error Definitions 2053 +--------------------------+--------+------------------+ 2054 | Error | Number | Description | 2055 +--------------------------+--------+------------------+ 2056 | NFS4ERR_BADLABEL | 10093 | Section 11.1.3.1 | 2057 | NFS4ERR_METADATA_NOTSUPP | 10090 | Section 11.1.2.1 | 2058 | NFS4ERR_OFFLOAD_DENIED | 10091 | Section 11.1.2.2 | 2059 | NFS4ERR_PARTNER_NO_AUTH | 10089 | Section 11.1.2.3 | 2060 | NFS4ERR_PARTNER_NOTSUPP | 10088 | Section 11.1.2.4 | 2061 | NFS4ERR_UNION_NOTSUPP | 10094 | Section 11.1.1.1 | 2062 | NFS4ERR_WRONG_LFS | 10092 | Section 11.1.3.2 | 2063 +--------------------------+--------+------------------+ 2065 Table 1 2067 11.1.1. General Errors 2069 This section deals with errors that are applicable to a broad set of 2070 different purposes. 2072 11.1.1.1. NFS4ERR_UNION_NOTSUPP (Error Code 10094) 2074 One of the arguments to the operation is a discriminated union and 2075 while the server supports the given operation, it does not support 2076 the selected arm of the discriminated union. For an example, see 2077 READ_PLUS (Section 14.10). 2079 11.1.2. Server to Server Copy Errors 2081 These errors deal with the interaction between server to server 2082 copies. 2084 11.1.2.1. NFS4ERR_METADATA_NOTSUPP (Error Code 10090) 2086 The destination file cannot support the same metadata as the source 2087 file. 2089 11.1.2.2. NFS4ERR_OFFLOAD_DENIED (Error Code 10091) 2091 The copy offload operation is supported by both the source and the 2092 destination, but the destination is not allowing it for this file. 
2093 If the client sees this error, it should fall back to the normal copy 2094 semantics. 2096 11.1.2.3. NFS4ERR_PARTNER_NO_AUTH (Error Code 10089) 2098 The source server does not authorize a server-to-server copy offload 2099 operation. This may be due to the client's failure to send the 2100 COPY_NOTIFY operation to the source server, the source server 2101 receiving a server-to-server copy offload request after the copy 2102 lease time expired, or some other permission problem. 2104 11.1.2.4. NFS4ERR_PARTNER_NOTSUPP (Error Code 10088) 2106 The remote server does not support the server-to-server copy offload 2107 protocol. 2109 11.1.3. Labeled NFS Errors 2111 These errors are used in Labeled NFS. 2113 11.1.3.1. NFS4ERR_BADLABEL (Error Code 10093) 2115 The label specified is invalid in some manner. 2117 11.1.3.2. NFS4ERR_WRONG_LFS (Error Code 10092) 2119 The LFS specified in the subject label is not compatible with the LFS 2120 in the object label. 2122 11.2. New Operations and Their Valid Errors 2124 This section contains a table that gives the valid error returns for 2125 each new NFSv4.2 protocol operation. The error code NFS4_OK 2126 (indicating no error) is not listed but should be understood to be 2127 returnable by all new operations. The error values for all other 2128 operations are defined in Section 15.2 of [1].
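   The rule above -- NFS4_OK is implicitly returnable by every new
   operation, while any other status must come from that operation's row
   in Table 2 -- can be sketched as a small validity check.  This is an
   illustrative sketch only; the abbreviated error sets and the helper
   name are assumptions for the example, not part of any NFS
   implementation:

```python
# Illustrative sketch of the Section 11.2 rule: a reply status for a new
# NFSv4.2 operation is valid if it is NFS4_OK or appears in that
# operation's row of Table 2.  The error sets here are abbreviated; a
# real table would carry every error listed in Table 2.

NFS4_OK = 0
NFS4ERR_OFFLOAD_DENIED = 10091
NFS4ERR_UNION_NOTSUPP = 10094

VALID_ERRORS = {
    "COPY": {NFS4ERR_OFFLOAD_DENIED},       # abbreviated
    "READ_PLUS": {NFS4ERR_UNION_NOTSUPP},   # abbreviated
}

def is_valid_status(op: str, status: int) -> bool:
    # NFS4_OK is not listed in Table 2 but is valid for all operations.
    return status == NFS4_OK or status in VALID_ERRORS.get(op, set())
```

   A client-side reply decoder might use such a check to flag a server
   that returns a status outside the operation's permitted set.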
2130 Valid Error Returns for Each New Protocol Operation 2132 +----------------+--------------------------------------------------+ 2133 | Operation | Errors | 2134 +----------------+--------------------------------------------------+ 2135 | COPY | NFS4ERR_ACCESS, NFS4ERR_ADMIN_REVOKED, | 2136 | | NFS4ERR_BADXDR, NFS4ERR_BAD_STATEID, | 2137 | | NFS4ERR_DEADSESSION, NFS4ERR_DELAY, | 2138 | | NFS4ERR_DELEG_REVOKED, NFS4ERR_DQUOT, | 2139 | | NFS4ERR_EXPIRED, NFS4ERR_FBIG, | 2140 | | NFS4ERR_FHEXPIRED, NFS4ERR_GRACE, NFS4ERR_INVAL, | 2141 | | NFS4ERR_IO, NFS4ERR_ISDIR, NFS4ERR_LOCKED, | 2142 | | NFS4ERR_MOVED, NFS4ERR_NOFILEHANDLE, | 2143 | | NFS4ERR_NOSPC, NFS4ERR_OFFLOAD_DENIED, | 2144 | | NFS4ERR_OLD_STATEID, NFS4ERR_OPENMODE, | 2145 | | NFS4ERR_OP_NOT_IN_SESSION, | 2146 | | NFS4ERR_PARTNER_NO_AUTH, | 2147 | | NFS4ERR_PARTNER_NOTSUPP, NFS4ERR_PNFS_IO_HOLE, | 2148 | | NFS4ERR_PNFS_NO_LAYOUT, NFS4ERR_REP_TOO_BIG, | 2149 | | NFS4ERR_REP_TOO_BIG_TO_CACHE, | 2150 | | NFS4ERR_REQ_TOO_BIG, NFS4ERR_RETRY_UNCACHED_REP, | 2151 | | NFS4ERR_ROFS, NFS4ERR_SERVERFAULT, | 2152 | | NFS4ERR_STALE, NFS4ERR_SYMLINK, | 2153 | | NFS4ERR_TOO_MANY_OPS, NFS4ERR_WRONG_TYPE | 2154 | COPY_NOTIFY | NFS4ERR_ACCESS, NFS4ERR_ADMIN_REVOKED, | 2155 | | NFS4ERR_BADXDR, NFS4ERR_BAD_STATEID, | 2156 | | NFS4ERR_DEADSESSION, NFS4ERR_DELAY, | 2157 | | NFS4ERR_DELEG_REVOKED, NFS4ERR_EXPIRED, | 2158 | | NFS4ERR_FHEXPIRED, NFS4ERR_GRACE, NFS4ERR_INVAL, | 2159 | | NFS4ERR_ISDIR, NFS4ERR_IO, NFS4ERR_LOCKED, | 2160 | | NFS4ERR_MOVED, NFS4ERR_NOFILEHANDLE, | 2161 | | NFS4ERR_OLD_STATEID, NFS4ERR_OPENMODE, | 2162 | | NFS4ERR_OP_NOT_IN_SESSION, NFS4ERR_PNFS_IO_HOLE, | 2163 | | NFS4ERR_PNFS_NO_LAYOUT, NFS4ERR_REP_TOO_BIG, | 2164 | | NFS4ERR_REP_TOO_BIG_TO_CACHE, | 2165 | | NFS4ERR_REQ_TOO_BIG, NFS4ERR_RETRY_UNCACHED_REP, | 2166 | | NFS4ERR_SERVERFAULT, NFS4ERR_STALE, | 2167 | | NFS4ERR_SYMLINK, NFS4ERR_TOO_MANY_OPS, | 2168 | | NFS4ERR_WRONG_TYPE | 2169 | OFFLOAD_ABORT | NFS4ERR_ADMIN_REVOKED, NFS4ERR_BADXDR, | 
2170 | | NFS4ERR_BAD_STATEID, NFS4ERR_COMPLETE_ALREADY, | 2171 | | NFS4ERR_DEADSESSION, NFS4ERR_EXPIRED, | 2172 | | NFS4ERR_DELAY, NFS4ERR_GRACE, NFS4ERR_NOTSUPP, | 2173 | | NFS4ERR_OLD_STATEID, NFS4ERR_OP_NOT_IN_SESSION, | 2174 | | NFS4ERR_SERVERFAULT, NFS4ERR_TOO_MANY_OPS | 2175 | OFFLOAD_REVOKE | NFS4ERR_ADMIN_REVOKED, NFS4ERR_BADXDR, | 2176 | | NFS4ERR_COMPLETE_ALREADY, NFS4ERR_DELAY, | 2177 | | NFS4ERR_GRACE, NFS4ERR_INVAL, NFS4ERR_MOVED, | 2178 | | NFS4ERR_NOTSUPP, NFS4ERR_OP_NOT_IN_SESSION, | 2179 | | NFS4ERR_SERVERFAULT, NFS4ERR_TOO_MANY_OPS | 2180 | OFFLOAD_STATUS | NFS4ERR_ADMIN_REVOKED, NFS4ERR_BADXDR, | 2181 | | NFS4ERR_BAD_STATEID, NFS4ERR_COMPLETE_ALREADY, | 2182 | | NFS4ERR_DEADSESSION, NFS4ERR_EXPIRED, | 2183 | | NFS4ERR_DELAY, NFS4ERR_GRACE, NFS4ERR_NOTSUPP, | 2184 | | NFS4ERR_OLD_STATEID, NFS4ERR_OP_NOT_IN_SESSION, | 2185 | | NFS4ERR_SERVERFAULT, NFS4ERR_TOO_MANY_OPS | 2186 | READ_PLUS | NFS4ERR_ACCESS, NFS4ERR_ADMIN_REVOKED, | 2187 | | NFS4ERR_BADXDR, NFS4ERR_BAD_STATEID, | 2188 | | NFS4ERR_DEADSESSION, NFS4ERR_DELAY, | 2189 | | NFS4ERR_DELEG_REVOKED, NFS4ERR_EXPIRED, | 2190 | | NFS4ERR_FHEXPIRED, NFS4ERR_GRACE, NFS4ERR_INVAL, | 2191 | | NFS4ERR_ISDIR, NFS4ERR_IO, NFS4ERR_LOCKED, | 2192 | | NFS4ERR_MOVED, NFS4ERR_NOFILEHANDLE, | 2193 | | NFS4ERR_OLD_STATEID, NFS4ERR_OPENMODE, | 2194 | | NFS4ERR_OP_NOT_IN_SESSION, NFS4ERR_PNFS_IO_HOLE, | 2195 | | NFS4ERR_PNFS_NO_LAYOUT, NFS4ERR_REP_TOO_BIG, | 2196 | | NFS4ERR_REP_TOO_BIG_TO_CACHE, | 2197 | | NFS4ERR_REQ_TOO_BIG, NFS4ERR_RETRY_UNCACHED_REP, | 2198 | | NFS4ERR_SERVERFAULT, NFS4ERR_STALE, | 2199 | | NFS4ERR_SYMLINK, NFS4ERR_TOO_MANY_OPS, | 2200 | | NFS4ERR_UNION_NOTSUPP, NFS4ERR_WRONG_TYPE | 2201 | SEEK | NFS4ERR_ACCESS, NFS4ERR_ADMIN_REVOKED, | 2202 | | NFS4ERR_BADXDR, NFS4ERR_BAD_STATEID, | 2203 | | NFS4ERR_DEADSESSION, NFS4ERR_DELAY, | 2204 | | NFS4ERR_DELEG_REVOKED, NFS4ERR_EXPIRED, | 2205 | | NFS4ERR_FHEXPIRED, NFS4ERR_GRACE, NFS4ERR_INVAL, | 2206 | | NFS4ERR_ISDIR, NFS4ERR_IO, NFS4ERR_LOCKED,
| 2207 | | NFS4ERR_MOVED, NFS4ERR_NOFILEHANDLE, | 2208 | | NFS4ERR_OLD_STATEID, NFS4ERR_OPENMODE, | 2209 | | NFS4ERR_OP_NOT_IN_SESSION, NFS4ERR_PNFS_IO_HOLE, | 2210 | | NFS4ERR_PNFS_NO_LAYOUT, NFS4ERR_REP_TOO_BIG, | 2211 | | NFS4ERR_REP_TOO_BIG_TO_CACHE, | 2212 | | NFS4ERR_REQ_TOO_BIG, NFS4ERR_RETRY_UNCACHED_REP, | 2213 | | NFS4ERR_SERVERFAULT, NFS4ERR_STALE, | 2214 | | NFS4ERR_SYMLINK, NFS4ERR_TOO_MANY_OPS, | 2215 | | NFS4ERR_UNION_NOTSUPP, NFS4ERR_WRONG_TYPE | 2216 | SEQUENCE | NFS4ERR_BADSESSION, NFS4ERR_BADSLOT, | 2217 | | NFS4ERR_BADXDR, NFS4ERR_BAD_HIGH_SLOT, | 2218 | | NFS4ERR_CONN_NOT_BOUND_TO_SESSION, | 2219 | | NFS4ERR_DEADSESSION, NFS4ERR_DELAY, | 2220 | | NFS4ERR_REP_TOO_BIG, | 2221 | | NFS4ERR_REP_TOO_BIG_TO_CACHE, | 2222 | | NFS4ERR_REQ_TOO_BIG, NFS4ERR_RETRY_UNCACHED_REP, | 2223 | | NFS4ERR_SEQUENCE_POS, NFS4ERR_SEQ_FALSE_RETRY, | 2224 | | NFS4ERR_SEQ_MISORDERED, NFS4ERR_TOO_MANY_OPS | 2225 | WRITE_PLUS | NFS4ERR_ACCESS, NFS4ERR_ADMIN_REVOKED, | 2226 | | NFS4ERR_BADXDR, NFS4ERR_BAD_STATEID, | 2227 | | NFS4ERR_DEADSESSION, NFS4ERR_DELAY, | 2228 | | NFS4ERR_DELEG_REVOKED, NFS4ERR_DQUOT, | 2229 | | NFS4ERR_EXPIRED, NFS4ERR_FBIG, | 2230 | | NFS4ERR_FHEXPIRED, NFS4ERR_GRACE, NFS4ERR_INVAL, | 2231 | | NFS4ERR_IO, NFS4ERR_ISDIR, NFS4ERR_LOCKED, | 2232 | | NFS4ERR_MOVED, NFS4ERR_NOFILEHANDLE, | 2233 | | NFS4ERR_NOSPC, NFS4ERR_OLD_STATEID, | 2234 | | NFS4ERR_OPENMODE, NFS4ERR_OP_NOT_IN_SESSION, | 2235 | | NFS4ERR_PNFS_IO_HOLE, NFS4ERR_PNFS_NO_LAYOUT, | 2236 | | NFS4ERR_REP_TOO_BIG, | 2237 | | NFS4ERR_REP_TOO_BIG_TO_CACHE, | 2238 | | NFS4ERR_REQ_TOO_BIG, NFS4ERR_RETRY_UNCACHED_REP, | 2239 | | NFS4ERR_ROFS, NFS4ERR_SERVERFAULT, | 2240 | | NFS4ERR_STALE, NFS4ERR_SYMLINK, | 2241 | | NFS4ERR_TOO_MANY_OPS, NFS4ERR_UNION_NOTSUPP, | 2242 | | NFS4ERR_WRONG_TYPE | 2243 +----------------+--------------------------------------------------+ 2245 Table 2 2247 11.3. 
New Callback Operations and Their Valid Errors 2249 This section contains a table that gives the valid error returns for 2250 each new NFSv4.2 callback operation. The error code NFS4_OK 2251 (indicating no error) is not listed but should be understood to be 2252 returnable by all new callback operations. The error values for all 2253 other callback operations are defined in Section 15.3 of [1]. 2255 Valid Error Returns for Each New Protocol Callback Operation 2257 +------------+------------------------------------------------------+ 2258 | Callback | Errors | 2259 | Operation | | 2260 +------------+------------------------------------------------------+ 2261 | CB_OFFLOAD | NFS4ERR_BADHANDLE, NFS4ERR_BADXDR, | 2262 | | NFS4ERR_BAD_STATEID, NFS4ERR_DELAY, | 2263 | | NFS4ERR_OP_NOT_IN_SESSION, NFS4ERR_REP_TOO_BIG, | 2264 | | NFS4ERR_REP_TOO_BIG_TO_CACHE, NFS4ERR_REQ_TOO_BIG, | 2265 | | NFS4ERR_RETRY_UNCACHED_REP, NFS4ERR_SERVERFAULT, | 2266 | | NFS4ERR_TOO_MANY_OPS | 2267 +------------+------------------------------------------------------+ 2269 Table 3 2271 12. New File Attributes 2273 12.1. New RECOMMENDED Attributes - List and Definition References 2275 The list of new RECOMMENDED attributes appears in Table 4. The 2276 meanings of the columns of the table are: 2278 Name: The name of the attribute. 2280 Id: The number assigned to the attribute. In the event of conflicts 2281 between the assigned number and [2], the latter is likely 2282 authoritative, but any conflict should be resolved with Errata to this document 2283 and/or [2]. See [22] for the Errata process. 2285 Data Type: The XDR data type of the attribute. 2287 Acc: Access allowed to the attribute. 2289 R means read-only (GETATTR may retrieve, SETATTR may not set). 2291 W means write-only (SETATTR may set, GETATTR may not retrieve). 2293 R W means read/write (GETATTR may retrieve, SETATTR may set). 2295 Defined in: The section of this specification that describes the 2296 attribute.
2298 +------------------+----+-------------------+-----+----------------+ 2299 | Name | Id | Data Type | Acc | Defined in | 2300 +------------------+----+-------------------+-----+----------------+ 2301 | change_attr_type | 79 | change_attr_type4 | R | Section 12.2.1 | 2302 | sec_label | 80 | sec_label4 | R W | Section 12.2.2 | 2303 | change_sec_label | 81 | change_sec_label4 | R | Section 12.2.3 | 2304 | space_reserved | 77 | boolean | R W | Section 12.2.4 | 2305 | space_freed | 78 | length4 | R | Section 12.2.5 | 2306 +------------------+----+-------------------+-----+----------------+ 2308 Table 4 2310 12.2. Attribute Definitions 2312 12.2.1. Attribute 79: change_attr_type 2314 enum change_attr_type4 { 2315 NFS4_CHANGE_TYPE_IS_MONOTONIC_INCR = 0, 2316 NFS4_CHANGE_TYPE_IS_VERSION_COUNTER = 1, 2317 NFS4_CHANGE_TYPE_IS_VERSION_COUNTER_NOPNFS = 2, 2318 NFS4_CHANGE_TYPE_IS_TIME_METADATA = 3, 2319 NFS4_CHANGE_TYPE_IS_UNDEFINED = 4 2320 }; 2322 change_attr_type is a per file system attribute which enables the 2323 NFSv4.2 server to provide additional information about how it expects 2324 the change attribute value to evolve after the file data or metadata 2325 has changed. While Section 5.4 of [1] discusses per file system 2326 attributes, it is expected that the value of change_attr_type not 2327 depend on the value of "homogeneous" and change only in the event of 2328 a migration. 2330 NFS4_CHANGE_TYPE_IS_UNDEFINED: The change attribute does not take 2331 values that fit into any of these categories. 2333 NFS4_CHANGE_TYPE_IS_MONOTONIC_INCR: The change attribute value MUST 2334 monotonically increase for every atomic change to the file 2335 attributes, data, or directory contents. 2337 NFS4_CHANGE_TYPE_IS_VERSION_COUNTER: The change attribute value MUST 2338 be incremented by one unit for every atomic change to the file 2339 attributes, data, or directory contents. This property is 2340 preserved when writing to pNFS data servers.
2342 NFS4_CHANGE_TYPE_IS_VERSION_COUNTER_NOPNFS: The change attribute 2343 value MUST be incremented by one unit for every atomic change to 2344 the file attributes, data, or directory contents. In the case 2345 where the client is writing to pNFS data servers, the number of 2346 increments is not guaranteed to exactly match the number of 2347 writes. 2349 NFS4_CHANGE_TYPE_IS_TIME_METADATA: The change attribute is 2350 implemented as suggested in the NFSv4 spec [9] in terms of the 2351 time_metadata attribute. 2353 If any of NFS4_CHANGE_TYPE_IS_MONOTONIC_INCR, 2354 NFS4_CHANGE_TYPE_IS_VERSION_COUNTER, or 2355 NFS4_CHANGE_TYPE_IS_TIME_METADATA is set, then the client knows at 2356 the very least that the change attribute is monotonically increasing, 2357 which is sufficient to resolve the question of which value is the 2358 most recent. 2360 If the client sees the value NFS4_CHANGE_TYPE_IS_TIME_METADATA, then 2361 by inspecting the value of the 'time_delta' attribute it additionally 2362 has the option of detecting rogue server implementations that use 2363 time_metadata in violation of the spec. 2365 If the client sees NFS4_CHANGE_TYPE_IS_VERSION_COUNTER, it has the 2366 ability to predict what the resulting change attribute value should 2367 be after a COMPOUND containing a SETATTR, WRITE, or CREATE. This 2368 again allows it to detect changes made in parallel by another client. 2369 The value NFS4_CHANGE_TYPE_IS_VERSION_COUNTER_NOPNFS permits the 2370 same, but only if the client is not doing pNFS WRITEs. 2372 Finally, if the server does not support change_attr_type or if 2373 NFS4_CHANGE_TYPE_IS_UNDEFINED is set, then the server SHOULD make an 2374 effort to implement the change attribute in terms of the 2375 time_metadata attribute. 2377 12.2.2.
Attribute 80: sec_label 2379 typedef uint32_t policy4; 2381 struct labelformat_spec4 { 2382 policy4 lfs_lfs; 2383 policy4 lfs_pi; 2384 }; 2386 struct sec_label4 { 2387 labelformat_spec4 slai_lfs; 2388 opaque slai_data<>; 2389 }; 2391 The FATTR4_SEC_LABEL attribute contains an array of two components with the 2392 first component being an LFS. It serves to provide the receiving end 2393 with the information necessary to translate the security attribute 2394 into a form that is usable by the endpoint. Label Formats assigned 2395 an LFS may optionally choose to include a Policy Identifier field to 2396 allow for complex policy deployments. The LFS and Label Format 2397 Registry are described in detail in [21]. The translation used to 2398 interpret the security attribute is not specified as part of the 2399 protocol as it may depend on various factors. The second component 2400 is an opaque section which contains the data of the attribute. This 2401 component is dependent on the MAC model to interpret and enforce. 2403 In particular, it is the responsibility of the LFS specification to 2404 define a maximum size for the opaque section, slai_data<>. When 2405 creating or modifying a label for an object, the client needs to be 2406 guaranteed that the server will accept a label that is sized 2407 correctly. Because both client and server are part of a specific MAC 2408 model, the client will be aware of the size. 2410 If a server supports sec_label, then it MUST also support 2411 change_sec_label. Any modification to sec_label MUST modify the 2412 value for change_sec_label. 2414 12.2.3. Attribute 81: change_sec_label 2416 struct change_sec_label4 { 2417 uint64_t csl_major; 2418 uint64_t csl_minor; 2419 }; 2421 The change_sec_label attribute is a read-only attribute per file. If 2422 the value of sec_label for a file is not the same at two disparate 2423 times then the values of change_sec_label at those times MUST be 2424 different as well.
The value of change_sec_label MAY change at other 2425 times as well, but this should be rare, as that will require the 2426 client to abort any operation in progress, re-read the label, and 2427 retry the operation. As the sec_label is not bounded by size, this 2428 attribute allows for VERIFY and NVERIFY to quickly determine if the 2429 sec_label has been modified. 2431 12.2.4. Attribute 77: space_reserved 2433 The space_reserved attribute is a read/write attribute of type 2434 boolean. It is a per file attribute and applies during the lifetime 2435 of the file or until it is turned off. When the space_reserved 2436 attribute is set via SETATTR, the server must ensure that there is 2437 disk space to accommodate every byte in the file before it can return 2438 success. If the server cannot guarantee this, it must return 2439 NFS4ERR_NOSPC. 2441 If the client tries to grow a file which has the space_reserved 2442 attribute set, the server must guarantee that there is disk space to 2443 accommodate every byte in the file with the new size before it can 2444 return success. If the server cannot guarantee this, it must return 2445 NFS4ERR_NOSPC. 2447 It is not required that the server allocate the space to the file 2448 before returning success. The allocation can be deferred; however, 2449 it must be guaranteed that it will not fail for lack of space. 2451 The value of space_reserved can be obtained at any time through 2452 GETATTR. If the size is retrieved at the same time, the client can 2453 determine the size of the reservation. 2455 In order to avoid ambiguity, the space_reserved bit cannot be set 2456 along with the size bit in SETATTR. Increasing the size of a file 2457 with space_reserved set will fail if space reservation cannot be 2458 guaranteed for the new size. If the file size is decreased, space 2459 reservation is only guaranteed for the new size. If a hole is 2460 punched into the file, then the reservation is not changed. 2462 12.2.5.
Attribute 78: space_freed 2464 space_freed gives the number of bytes freed if the file is deleted. 2465 This attribute is read only and is of type length4. It is a per file 2466 attribute. 2468 13. Operations: REQUIRED, RECOMMENDED, or OPTIONAL 2470 The following tables summarize the operations of the NFSv4.2 protocol 2471 and the corresponding designation of REQUIRED, RECOMMENDED, or 2472 OPTIONAL to implement, or of OBSOLESCENT or MUST NOT implement. 2473 The designation of OBSOLESCENT is for those operations which are defined 2474 in either NFSv4.0 or NFSv4.1 and are intended to be classified as 2475 MUST NOT implement in NFSv4.3. The designation of MUST NOT 2476 implement is reserved for those operations that were defined in 2477 either NFSv4.0 or NFSv4.1 and MUST NOT be implemented in NFSv4.2. 2479 For the most part, the REQUIRED, RECOMMENDED, or OPTIONAL designation 2480 for operations sent by the client is for the server implementation. 2481 The client is generally required to implement the operations needed 2482 for the operating environment that it serves. For example, a 2483 read-only NFSv4.2 client would have no need to implement the WRITE 2484 operation and is not required to do so. 2486 The REQUIRED or OPTIONAL designation for callback operations sent by 2487 the server is for both the client and server. Generally, the client 2488 has the option of creating the backchannel and sending the operations 2489 on the fore channel that will be a catalyst for the server sending 2490 callback operations. A partial exception is CB_RECALL_SLOT; the only 2491 way the client can avoid supporting this operation is by not creating 2492 a backchannel. 2494 Since this is a summary of the operations and their designation, 2495 there are subtleties that are not presented here.
Therefore, if 2496 there is a question of the requirements of implementation, the 2497 operation descriptions themselves must be consulted along with other 2498 relevant explanatory text within either this specification or that of 2499 NFSv4.1 [1]. 2501 The abbreviations used in the second and third columns of the table 2502 are defined as follows. 2504 REQ REQUIRED to implement 2506 REC RECOMMENDED to implement 2508 OPT OPTIONAL to implement 2510 MNI MUST NOT implement 2512 OBS Also OBSOLESCENT for future versions. 2514 For the NFSv4.2 features that are OPTIONAL, the operations that 2515 support those features are OPTIONAL, and the server would return 2516 NFS4ERR_NOTSUPP in response to the client's use of those operations. 2517 If an OPTIONAL feature is supported, it is possible that a set of 2518 operations related to the feature becomes REQUIRED to implement. The 2519 third column of the table designates the feature(s) and whether the 2520 operation is REQUIRED or OPTIONAL in the presence of support for the 2521 feature.
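   The NFS4ERR_NOTSUPP behavior described above suggests a simple
   client-side probing strategy for OPTIONAL operations.  The sketch
   below is purely illustrative; the FeatureProbe class and the send_op
   callback are assumed names for the example, not part of any NFS
   library:

```python
# Illustrative sketch: a client probes an OPTIONAL NFSv4.2 operation
# once; if the server answers NFS4ERR_NOTSUPP, the client remembers
# that and takes a fallback path without re-sending the operation.

NFS4ERR_NOTSUPP = 10004  # error value carried over from RFC 5661

class FeatureProbe:
    def __init__(self, send_op):
        self.send_op = send_op      # callable(op_name) -> NFS status code
        self.unsupported = set()    # operations this server rejected

    def try_op(self, op_name, fallback):
        """Send op_name unless it is already known to be unsupported."""
        if op_name in self.unsupported:
            return fallback()
        status = self.send_op(op_name)
        if status == NFS4ERR_NOTSUPP:
            self.unsupported.add(op_name)
            return fallback()
        return status
```

   For example, a client might probe COPY this way and fall back to an
   ordinary READ/WRITE loop when the server lacks the Server Side Copy
   feature.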
2523 The OPTIONAL features identified and their abbreviations are as 2524 follows: 2526 pNFS Parallel NFS 2528 FDELG File Delegations 2530 DDELG Directory Delegations 2532 COPY Server Side Copy 2533 ADH Application Data Holes 2535 Operations 2537 +----------------------+---------------------+----------------------+ 2538 | Operation | EOL, REQ, REC, OPT, | Feature (REQ, REC, | 2539 | | or MNI | or OPT) | 2540 +----------------------+---------------------+----------------------+ 2541 | ACCESS | REQ | | 2542 | BACKCHANNEL_CTL | REQ | | 2543 | BIND_CONN_TO_SESSION | REQ | | 2544 | CLOSE | REQ | | 2545 | COMMIT | REQ | | 2546 | COPY | OPT | COPY (REQ) | 2547 | OFFLOAD_ABORT | OPT | COPY (REQ) | 2548 | COPY_NOTIFY | OPT | COPY (REQ) | 2549 | OFFLOAD_REVOKE | OPT | COPY (REQ) | 2550 | OFFLOAD_STATUS | OPT | COPY (REQ) | 2551 | CREATE | REQ | | 2552 | CREATE_SESSION | REQ | | 2553 | DELEGPURGE | OPT | FDELG (REQ) | 2554 | DELEGRETURN | OPT | FDELG, DDELG, pNFS | 2555 | | | (REQ) | 2556 | DESTROY_CLIENTID | REQ | | 2557 | DESTROY_SESSION | REQ | | 2558 | EXCHANGE_ID | REQ | | 2559 | FREE_STATEID | REQ | | 2560 | GETATTR | REQ | | 2561 | GETDEVICEINFO | OPT | pNFS (REQ) | 2562 | GETDEVICELIST | OPT | pNFS (OPT) | 2563 | GETFH | REQ | | 2564 | WRITE_PLUS | OPT | ADH (REQ) | 2565 | GET_DIR_DELEGATION | OPT | DDELG (REQ) | 2566 | LAYOUTCOMMIT | OPT | pNFS (REQ) | 2567 | LAYOUTGET | OPT | pNFS (REQ) | 2568 | LAYOUTRETURN | OPT | pNFS (REQ) | 2569 | LINK | OPT | | 2570 | LOCK | REQ | | 2571 | LOCKT | REQ | | 2572 | LOCKU | REQ | | 2573 | LOOKUP | REQ | | 2574 | LOOKUPP | REQ | | 2575 | NVERIFY | REQ | | 2576 | OPEN | REQ | | 2577 | OPENATTR | OPT | | 2578 | OPEN_CONFIRM | MNI | | 2579 | OPEN_DOWNGRADE | REQ | | 2580 | PUTFH | REQ | | 2581 | PUTPUBFH | REQ | | 2582 | PUTROOTFH | REQ | | 2583 | READ | REQ (OBS) | | 2584 | READDIR | REQ | | 2585 | READLINK | OPT | | 2586 | READ_PLUS | OPT | ADH (REQ) | 2587 | RECLAIM_COMPLETE | REQ | | 2588 | RELEASE_LOCKOWNER | MNI | | 2589 | 
REMOVE | REQ | | 2590 | RENAME | REQ | | 2591 | RENEW | MNI | | 2592 | RESTOREFH | REQ | | 2593 | SAVEFH | REQ | | 2594 | SECINFO | REQ | | 2595 | SECINFO_NO_NAME | REC | pNFS file layout | 2596 | | | (REQ) | 2597 | SEQUENCE | REQ | | 2598 | SETATTR | REQ | | 2599 | SETCLIENTID | MNI | | 2600 | SETCLIENTID_CONFIRM | MNI | | 2601 | SET_SSV | REQ | | 2602 | TEST_STATEID | REQ | | 2603 | VERIFY | REQ | | 2604 | WANT_DELEGATION | OPT | FDELG (OPT) | 2605 | WRITE | REQ (OBS) | | 2606 +----------------------+---------------------+----------------------+ 2608 Callback Operations 2610 +-------------------------+-------------------+---------------------+ 2611 | Operation | REQ, REC, OPT, or | Feature (REQ, REC, | 2612 | | MNI | or OPT) | 2613 +-------------------------+-------------------+---------------------+ 2614 | CB_OFFLOAD | OPT | COPY (REQ) | 2615 | CB_GETATTR | OPT | FDELG (REQ) | 2616 | CB_LAYOUTRECALL | OPT | pNFS (REQ) | 2617 | CB_NOTIFY | OPT | DDELG (REQ) | 2618 | CB_NOTIFY_DEVICEID | OPT | pNFS (OPT) | 2619 | CB_NOTIFY_LOCK | OPT | | 2620 | CB_PUSH_DELEG | OPT | FDELG (OPT) | 2621 | CB_RECALL | OPT | FDELG, DDELG, pNFS | 2622 | | | (REQ) | 2623 | CB_RECALL_ANY | OPT | FDELG, DDELG, pNFS | 2624 | | | (REQ) | 2625 | CB_RECALL_SLOT | REQ | | 2626 | CB_RECALLABLE_OBJ_AVAIL | OPT | DDELG, pNFS (REQ) | 2627 | CB_SEQUENCE | OPT | FDELG, DDELG, pNFS | 2628 | | | (REQ) | 2629 | CB_WANTS_CANCELLED | OPT | FDELG, DDELG, pNFS | 2630 | | | (REQ) | 2631 +-------------------------+-------------------+---------------------+ 2633 14. NFSv4.2 Operations 2635 14.1. Operation 59: COPY - Initiate a server-side copy 2637 14.1.1. 
ARGUMENT 2639 const COPY4_GUARDED = 0x00000001; 2640 const COPY4_METADATA = 0x00000002; 2642 struct COPY4args { 2643 /* SAVED_FH: source file */ 2644 /* CURRENT_FH: destination file or */ 2645 /* directory */ 2646 stateid4 ca_src_stateid; 2647 stateid4 ca_dst_stateid; 2648 offset4 ca_src_offset; 2649 offset4 ca_dst_offset; 2650 length4 ca_count; 2651 uint32_t ca_flags; 2652 component4 ca_destination; 2653 netloc4 ca_source_server<>; 2654 }; 2656 14.1.2. RESULT 2658 union COPY4res switch (nfsstat4 cr_status) { 2659 case NFS4_OK: 2660 write_response4 resok4; 2661 default: 2662 length4 cr_bytes_copied; 2663 }; 2665 14.1.3. DESCRIPTION 2667 The COPY operation is used for both intra-server and inter-server 2668 copies. In both cases, the COPY is always sent from the client to 2669 the destination server of the file copy. The COPY operation requests 2670 that a file be copied from the location specified by the SAVED_FH 2671 value to the location specified by the combination of CURRENT_FH and 2672 ca_destination. 2674 The SAVED_FH must be a regular file. If SAVED_FH is not a regular 2675 file, the operation MUST fail and return NFS4ERR_WRONG_TYPE. 2677 In order to set SAVED_FH to the source file handle, the compound 2678 procedure requesting the COPY will include a sub-sequence of 2679 operations such as 2681 PUTFH source-fh 2682 SAVEFH 2684 If the request is for a server-to-server copy, the source-fh is a 2685 filehandle from the source server and the compound procedure is being 2686 executed on the destination server. In this case, the source-fh is a 2687 foreign filehandle on the server receiving the COPY request. If 2688 either PUTFH or SAVEFH checked the validity of the filehandle, the 2689 operation would likely fail and return NFS4ERR_STALE. 2691 If a server supports the server-to-server COPY feature, a PUTFH 2692 followed by a SAVEFH MUST NOT return NFS4ERR_STALE for either 2693 operation. These restrictions do not pose substantial difficulties 2694 for servers. 
The CURRENT_FH and SAVED_FH may be validated in the 2695 context of the operation referencing them and an NFS4ERR_STALE error 2696 returned for an invalid file handle at that point. 2698 For an intra-server copy, both the ca_src_stateid and ca_dst_stateid 2699 MUST refer to either open or locking states provided earlier by the 2700 server. If either stateid is invalid, then the operation MUST fail. 2701 If the request is for an inter-server copy, then the ca_src_stateid 2702 can be ignored. If ca_dst_stateid is invalid, then the operation 2703 MUST fail. 2705 The CURRENT_FH and ca_destination together specify the destination of 2706 the copy operation. If ca_destination is of 0 (zero) length, then 2707 CURRENT_FH specifies the target file. In this case, CURRENT_FH MUST 2708 be a regular file and not a directory. If ca_destination is not of 0 2709 (zero) length, the ca_destination argument specifies the file name to 2710 which the data will be copied within the directory identified by 2711 CURRENT_FH. In this case, CURRENT_FH MUST be a directory and not a 2712 regular file. 2714 If the file named by ca_destination does not exist and the operation 2715 completes successfully, the file will be visible in the file system 2716 namespace. If the file does not exist and the operation fails, the 2717 file MAY be visible in the file system namespace depending on when 2718 the failure occurs and on the implementation of the NFS server 2719 receiving the COPY operation. If the ca_destination name cannot be 2720 created in the destination file system (due to file name 2721 restrictions, such as case or length), the operation MUST fail. 2723 The ca_src_offset is the offset within the source file from which the 2724 data will be read, the ca_dst_offset is the offset within the 2725 destination file to which the data will be written, and the ca_count 2726 is the number of bytes that will be copied. An offset of 0 (zero) 2727 specifies the start of the file.
A count of 0 (zero) requests that 2728 all bytes from ca_src_offset through EOF be copied to the 2729 destination. If concurrent modifications to the source file overlap 2730 with the source file region being copied, the data copied may include 2731 all, some, or none of the modifications. The client can use standard 2732 NFS operations (e.g., OPEN with OPEN4_SHARE_DENY_WRITE or mandatory 2733 byte range locks) to protect against concurrent modifications if the 2734 client is concerned about this. If the source file's end of file is 2735 being modified in parallel with a copy that specifies a count of 0 2736 (zero) bytes, the amount of data copied is implementation dependent 2737 (clients may guard against this case by specifying a non-zero count 2738 value or preventing modification of the source file as mentioned 2739 above). 2741 If the source offset or the source offset plus count is greater than 2742 or equal to the size of the source file, the operation will fail with 2743 NFS4ERR_INVAL. The destination offset or destination offset plus 2744 count may be greater than the size of the destination file. This 2745 allows the client to issue parallel copies to implement 2746 operations such as "cat file1 file2 file3 file4 > dest". 2748 If the destination file is created as a result of this command, the 2749 destination file's size will be equal to the number of bytes 2750 successfully copied. If the destination file already existed, the 2751 destination file's size may increase as a result of this operation 2752 (e.g., if ca_dst_offset plus ca_count is greater than the 2753 destination's initial size). 2755 If the ca_source_server list is specified, then this is an inter- 2756 server copy operation and the source file is on a remote server. The 2757 client is expected to have previously issued a successful COPY_NOTIFY 2758 request to the remote source server. The ca_source_server list MUST 2759 be the same as the COPY_NOTIFY response's cnr_source_server list.
If 2760 the client includes the entries from the COPY_NOTIFY response's 2761 cnr_source_server list in the ca_source_server list, the source 2762 server can indicate a specific copy protocol for the destination 2763 server to use by returning a URL, which specifies both a protocol 2764 service and server name. Server-to-server copy protocol 2765 considerations are described in Section 3.2.5 and Section 3.4.1. 2767 The ca_flags argument allows the copy operation to be customized in 2768 the following ways using the guarded flag (COPY4_GUARDED) and the 2769 metadata flag (COPY4_METADATA). 2771 If the guarded flag is set and the destination exists on the server, 2772 this operation will fail with NFS4ERR_EXIST. 2774 If the guarded flag is not set and the destination exists on the 2775 server, the behavior is implementation dependent. 2777 If the metadata flag is set and the client is requesting a whole file 2778 copy (i.e., ca_count is 0 (zero)), a subset of the destination file's 2779 attributes MUST be the same as the source file's corresponding 2780 attributes and a subset of the destination file's attributes SHOULD 2781 be the same as the source file's corresponding attributes. The 2782 attributes in the MUST and SHOULD copy subsets will be defined for 2783 each NFS version. 2785 For NFSv4.2, Table 5 and Table 6 list the REQUIRED and RECOMMENDED 2786 attributes respectively. In the "Copy to destination file?" column, 2787 a "MUST" indicates that the attribute is part of the MUST copy set. 2788 A "SHOULD" indicates that the attribute is part of the SHOULD copy 2789 set. A "no" indicates that the attribute MUST NOT be copied. 2791 REQUIRED attributes 2793 +--------------------+----+---------------------------+ 2794 | Name | Id | Copy to destination file? 
| 2795 +--------------------+----+---------------------------+ 2796 | supported_attrs | 0 | no | 2797 | type | 1 | MUST | 2798 | fh_expire_type | 2 | no | 2799 | change | 3 | SHOULD | 2800 | size | 4 | MUST | 2801 | link_support | 5 | no | 2802 | symlink_support | 6 | no | 2803 | named_attr | 7 | no | 2804 | fsid | 8 | no | 2805 | unique_handles | 9 | no | 2806 | lease_time | 10 | no | 2807 | rdattr_error | 11 | no | 2808 | filehandle | 19 | no | 2809 | suppattr_exclcreat | 75 | no | 2810 +--------------------+----+---------------------------+ 2812 Table 5 2814 RECOMMENDED attributes 2816 +--------------------+----+---------------------------+ 2817 | Name | Id | Copy to destination file? | 2818 +--------------------+----+---------------------------+ 2819 | acl | 12 | MUST | 2820 | aclsupport | 13 | no | 2821 | archive | 14 | no | 2822 | cansettime | 15 | no | 2823 | case_insensitive | 16 | no | 2824 | case_preserving | 17 | no | 2825 | change_attr_type | 79 | no | 2826 | change_policy | 60 | no | 2827 | chown_restricted | 18 | MUST | 2828 | dacl | 58 | MUST | 2829 | dir_notif_delay | 56 | no | 2830 | dirent_notif_delay | 57 | no | 2831 | fileid | 20 | no | 2832 | files_avail | 21 | no | 2833 | files_free | 22 | no | 2834 | files_total | 23 | no | 2835 | fs_charset_cap | 76 | no | 2836 | fs_layout_type | 62 | no | 2837 | fs_locations | 24 | no | 2838 | fs_locations_info | 67 | no | 2839 | fs_status | 61 | no | 2840 | hidden | 25 | MUST | 2841 | homogeneous | 26 | no | 2842 | layout_alignment | 66 | no | 2843 | layout_blksize | 65 | no | 2844 | layout_hint | 63 | no | 2845 | layout_type | 64 | no | 2846 | maxfilesize | 27 | no | 2847 | maxlink | 28 | no | 2848 | maxname | 29 | no | 2849 | maxread | 30 | no | 2850 | maxwrite | 31 | no | 2851 | mdsthreshold | 68 | no | 2852 | mimetype | 32 | MUST | 2853 | mode | 33 | MUST | 2854 | mode_set_masked | 74 | no | 2855 | mounted_on_fileid | 55 | no | 2856 | no_trunc | 34 | no | 2857 | numlinks | 35 | no | 2858 | owner | 36 | 
MUST | 2859 | owner_group | 37 | MUST | 2860 | quota_avail_hard | 38 | no | 2861 | quota_avail_soft | 39 | no | 2862 | quota_used | 40 | no | 2863 | rawdev | 41 | no | 2864 | retentevt_get | 71 | MUST | 2865 | retentevt_set | 72 | no | 2866 | retention_get | 69 | MUST | 2867 | retention_hold | 73 | MUST | 2868 | retention_set | 70 | no | 2869 | sacl | 59 | MUST | 2870 | sec_label | 80 | MUST | 2871 | space_avail | 42 | no | 2872 | space_free | 43 | no | 2873 | space_freed | 78 | no | 2874 | space_reserved | 77 | MUST | 2875 | space_total | 44 | no | 2876 | space_used | 45 | no | 2877 | system | 46 | MUST | 2878 | time_access | 47 | MUST | 2879 | time_access_set | 48 | no | 2880 | time_backup | 49 | no | 2881 | time_create | 50 | MUST | 2882 | time_delta | 51 | no | 2883 | time_metadata | 52 | SHOULD | 2884 | time_modify | 53 | MUST | 2885 | time_modify_set | 54 | no | 2886 +--------------------+----+---------------------------+ 2888 Table 6 2890 [NOTE: The source file's attribute values will take precedence over 2891 any attribute values inherited by the destination file.] 2893 In the case of an inter-server copy or an intra-server copy between 2894 file systems, the attributes supported for the source file and 2895 destination file could be different. By definition, the REQUIRED 2896 attributes will be supported in all cases. If the metadata flag is 2897 set and the source file has a RECOMMENDED attribute that is not 2898 supported for the destination file, the copy MUST fail with 2899 NFS4ERR_ATTRNOTSUPP. 2901 Any attribute supported by the destination server that is not set on 2902 the source file SHOULD be left unset. 2904 Metadata attributes not exposed via the NFS protocol SHOULD be copied 2905 to the destination file where appropriate. 2907 The destination file's named attributes are not duplicated from the 2908 source file. After the copy process completes, the client MAY 2909 attempt to duplicate named attributes using standard NFSv4 2910 operations.
However, the destination file's named attribute 2911 capabilities MAY be different from the source file's named attribute 2912 capabilities. 2914 If the metadata flag is not set and the client is requesting a whole 2915 file copy (i.e., ca_count is 0 (zero)), the destination file's 2916 metadata is implementation dependent. 2918 If the client is requesting a partial file copy (i.e., ca_count is 2919 not 0 (zero)), the client SHOULD NOT set the metadata flag and the 2920 server MUST ignore the metadata flag. 2922 If the operation does not result in an immediate failure, the server 2923 will return NFS4_OK, and the CURRENT_FH will remain the destination's 2924 filehandle. 2926 If an immediate failure does occur, cr_bytes_copied will be set to 2927 the number of bytes copied to the destination file before the error 2928 occurred. The cr_bytes_copied value indicates the number of bytes 2929 copied but not which specific bytes have been copied. 2931 A return of NFS4_OK indicates that either the operation is complete 2932 or the operation was initiated and a callback will be used to deliver 2933 the final status of the operation. 2935 If the cr_callback_id is returned, this indicates that the operation 2936 was initiated and a CB_OFFLOAD callback will deliver the final 2937 results of the operation. The cr_callback_id stateid is termed a 2938 copy stateid in this context. The server is given the option of 2939 returning the results in a callback because the data may require a 2940 relatively long period of time to copy. 2942 If no cr_callback_id is returned, the operation completed 2943 synchronously and no callback will be issued by the server. The 2944 completion status of the operation is indicated by cr_status. 2946 If the copy completes successfully, either synchronously or 2947 asynchronously, the data copied from the source file to the 2948 destination file MUST appear identical to the NFS client. 
However, 2949 the NFS server's on disk representation of the data in the source 2950 file and destination file MAY differ. For example, the NFS server 2951 might encrypt, compress, deduplicate, or otherwise represent the on 2952 disk data in the source and destination file differently. 2954 14.2. Operation 60: OFFLOAD_ABORT - Cancel a server-side copy 2955 14.2.1. ARGUMENT 2957 struct OFFLOAD_ABORT4args { 2958 /* CURRENT_FH: destination file */ 2959 stateid4 oaa_stateid; 2960 }; 2962 14.2.2. RESULT 2964 struct OFFLOAD_ABORT4res { 2965 nfsstat4 oar_status; 2966 }; 2968 14.2.3. DESCRIPTION 2970 OFFLOAD_ABORT is used for both intra- and inter-server asynchronous 2971 copies. The OFFLOAD_ABORT operation allows the client to cancel a 2972 server-side copy operation that it initiated. This operation is sent 2973 in a COMPOUND request from the client to the destination server. 2974 This operation may be used to cancel a copy when the application that 2975 requested the copy exits before the operation is completed or for 2976 some other reason. 2978 The request contains the filehandle and copy stateid cookies that act 2979 as the context for the previously initiated copy operation. 2981 The result's oar_status field indicates whether the cancel was 2982 successful or not. A value of NFS4_OK indicates that the copy 2983 operation was canceled and no callback will be issued by the server. 2984 A copy operation that is successfully canceled may result in none, 2985 some, or all of the data and/or metadata copied. 2987 If the server supports asynchronous copies, the server is REQUIRED to 2988 support the OFFLOAD_ABORT operation. 2990 14.3. Operation 61: COPY_NOTIFY - Notify a source server of a future 2991 copy 2993 14.3.1. ARGUMENT 2995 struct COPY_NOTIFY4args { 2996 /* CURRENT_FH: source file */ 2997 stateid4 cna_src_stateid; 2998 netloc4 cna_destination_server; 2999 }; 3001 14.3.2. 
RESULT 3003 struct COPY_NOTIFY4resok { 3004 nfstime4 cnr_lease_time; 3005 netloc4 cnr_source_server<>; 3006 }; 3008 union COPY_NOTIFY4res switch (nfsstat4 cnr_status) { 3009 case NFS4_OK: 3010 COPY_NOTIFY4resok resok4; 3011 default: 3012 void; 3013 }; 3015 14.3.3. DESCRIPTION 3017 This operation is used for an inter-server copy. A client sends this 3018 operation in a COMPOUND request to the source server to authorize a 3019 destination server identified by cna_destination_server to read the 3020 file specified by CURRENT_FH on behalf of the given user. 3022 The cna_src_stateid MUST refer to either open or locking states 3023 provided earlier by the server. If it is invalid, then the operation 3024 MUST fail. 3026 The cna_destination_server MUST be specified using the netloc4 3027 network location format. The server is not required to resolve the 3028 cna_destination_server address before completing this operation. 3030 If this operation succeeds, the source server will allow the 3031 cna_destination_server to copy the specified file on behalf of the 3032 given user as long as both of the following conditions are met: 3034 o The destination server begins reading the source file before the 3035 cnr_lease_time expires. If the cnr_lease_time expires while the 3036 destination server is still reading the source file, the 3037 destination server is allowed to finish reading the file. 3039 o The client has not issued a COPY_REVOKE for the same combination 3040 of user, filehandle, and destination server. 3042 The cnr_lease_time is chosen by the source server. A cnr_lease_time 3043 of 0 (zero) indicates an infinite lease. To avoid the need for 3044 synchronized clocks, copy lease times are granted by the server as a 3045 time delta. To renew the copy lease time the client should resend 3046 the same copy notification request to the source server. 
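Because cnr_lease_time is granted as a time delta rather than an absolute time, a client can track the renewal deadline with simple local arithmetic. The sketch below is a hypothetical client-side helper; the renew-at-half-lease policy is an assumed client heuristic, not something the protocol mandates (any resend of the same COPY_NOTIFY before expiry renews the lease):

```python
def renewal_deadline(granted_at, cnr_lease_time):
    """Compute when to resend COPY_NOTIFY to renew a copy lease.

    granted_at     -- local clock value when the lease was granted
    cnr_lease_time -- lease delta in seconds from the source server;
                      0 (zero) indicates an infinite lease

    Renewing at half the lease interval is an illustrative client
    policy only; the protocol requires merely that the destination
    begin reading before the lease expires.
    """
    if cnr_lease_time == 0:
        return None  # infinite lease: no renewal required
    return granted_at + cnr_lease_time / 2
```

A client would resend the same COPY_NOTIFY to the source server when its local clock passes the returned deadline.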
3048 A successful response will also contain a list of netloc4 network 3049 location formats called cnr_source_server, on which the source is 3050 willing to accept connections from the destination. These might not 3051 be reachable from the client and might be located on networks to 3052 which the client has no connection. 3054 If the client wishes to perform an inter-server copy, the client MUST 3055 send a COPY_NOTIFY to the source server. Therefore, the source 3056 server MUST support COPY_NOTIFY. 3058 For a copy only involving one server (the source and destination are 3059 on the same server), this operation is unnecessary. 3061 14.4. Operation 62: OFFLOAD_REVOKE - Revoke a destination server's copy 3062 privileges 3064 14.4.1. ARGUMENT 3066 struct OFFLOAD_REVOKE4args { 3067 /* CURRENT_FH: source file */ 3068 netloc4 ora_destination_server; 3069 }; 3071 14.4.2. RESULT 3073 struct OFFLOAD_REVOKE4res { 3074 nfsstat4 orr_status; 3075 }; 3077 14.4.3. DESCRIPTION 3079 This operation is used for an inter-server copy. A client sends this 3080 operation in a COMPOUND request to the source server to revoke the 3081 authorization of a destination server identified by 3082 ora_destination_server from reading the file specified by CURRENT_FH 3083 on behalf of the given user. If the ora_destination_server has already 3084 begun copying the file, a successful return from this operation 3085 indicates that further access will be prevented. 3087 The ora_destination_server MUST be specified using the netloc4 3088 network location format. The server is not required to resolve the 3089 ora_destination_server address before completing this operation. 3091 The client uses OFFLOAD_ABORT to inform the destination to stop the 3092 active transfer and OFFLOAD_REVOKE to inform the source to not allow 3093 any more copy requests from the destination.
The OFFLOAD_REVOKE 3094 operation is also useful in situations in which the source server 3095 granted a very long or infinite lease on the destination server's 3096 ability to read the source file and all copy operations on the source 3097 file have been completed. 3099 For a copy only involving one server (the source and destination are 3100 on the same server), this operation is unnecessary. 3102 If the server supports COPY_NOTIFY, the server is REQUIRED to support 3103 the OFFLOAD_REVOKE operation. 3105 14.5. Operation 63: OFFLOAD_STATUS - Poll for status of a server-side 3106 copy 3108 14.5.1. ARGUMENT 3110 struct OFFLOAD_STATUS4args { 3111 /* CURRENT_FH: destination file */ 3112 stateid4 osa_stateid; 3113 }; 3115 14.5.2. RESULT 3117 struct OFFLOAD_STATUS4resok { 3118 length4 osr_bytes_copied; 3119 nfsstat4 osr_complete<1>; 3120 }; 3122 union OFFLOAD_STATUS4res switch (nfsstat4 osr_status) { 3123 case NFS4_OK: 3124 OFFLOAD_STATUS4resok osr_resok4; 3125 default: 3126 void; 3127 }; 3129 14.5.3. DESCRIPTION 3131 OFFLOAD_STATUS is used for both intra- and inter-server asynchronous 3132 copies. The OFFLOAD_STATUS operation allows the client to poll the 3133 destination server to determine the status of an asynchronous copy 3134 operation. 3136 If this operation is successful, the number of bytes copied is 3137 returned to the client in the osr_bytes_copied field. The 3138 osr_bytes_copied value indicates the number of bytes copied but not 3139 which specific bytes have been copied. 3141 If the optional osr_complete field is present, the copy has 3142 completed. In this case, the status value indicates the result of the 3143 asynchronous copy operation. In all cases, the server will also 3144 deliver the final results of the asynchronous copy in a CB_OFFLOAD 3145 operation. 3147 The failure of this operation does not indicate the result of the 3148 asynchronous copy in any way.
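The optional osr_complete field (declared as `nfsstat4 osr_complete<1>`, i.e., zero or one elements) drives a natural polling loop. Below is a hypothetical client-side sketch; `send_offload_status` stands in for the RPC transport and is not part of the protocol, and the fixed polling interval is an assumed client policy:

```python
import time

def wait_for_copy(send_offload_status, osa_stateid, interval=1.0):
    """Poll an asynchronous copy with OFFLOAD_STATUS until it completes.

    send_offload_status(stateid) is assumed to return a pair
    (osr_bytes_copied, osr_complete), where osr_complete is a list of
    zero or one nfsstat4 values, mirroring 'nfsstat4 osr_complete<1>'.
    Returns (final_status, bytes_copied).
    """
    while True:
        bytes_copied, complete = send_offload_status(osa_stateid)
        if complete:  # optional field present: the copy has finished
            return complete[0], bytes_copied
        time.sleep(interval)
```

In practice a client would bound this loop, since the server delivers the final result via CB_OFFLOAD in all cases.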
3150 If the server supports asynchronous copies, the server is REQUIRED to 3151 support the OFFLOAD_STATUS operation. 3153 14.6. Modification to Operation 42: EXCHANGE_ID - Instantiate Client ID 3155 14.6.1. ARGUMENT 3157 /* new */ 3158 const EXCHGID4_FLAG_SUPP_FENCE_OPS = 0x00000004; 3160 14.6.2. RESULT 3162 Unchanged 3164 14.6.3. MOTIVATION 3166 Enterprise applications require guarantees that an operation has 3167 either aborted or completed. NFSv4.1 provides this guarantee as long 3168 as the session is alive: simply send a SEQUENCE operation on the same 3169 slot with a new sequence number, and the successful return of 3170 SEQUENCE indicates the previous operation has completed. However, if 3171 the session is lost, there is no way to know when any in progress 3172 operations have aborted or completed. In hindsight, the NFSv4.1 3173 specification should have mandated that DESTROY_SESSION either abort 3174 or complete all outstanding operations. 3176 14.6.4. DESCRIPTION 3178 A client SHOULD request the EXCHGID4_FLAG_SUPP_FENCE_OPS capability 3179 when it sends an EXCHANGE_ID operation. The server SHOULD set this 3180 capability in the EXCHANGE_ID reply whether the client requests it or 3181 not. It is the server's return that determines whether this 3182 capability is in effect. When it is in effect, the following will 3183 occur: 3185 o The server will not reply to any DESTROY_SESSION invoked with the 3186 client ID until all operations in progress are completed or 3187 aborted. 3189 o The server will not reply to subsequent EXCHANGE_ID invoked on the 3190 same client owner with a new verifier until all operations in 3191 progress on the client ID's session are completed or aborted. 3193 o The NFS server SHOULD support client ID trunking, and if it does 3194 and the EXCHGID4_FLAG_SUPP_FENCE_OPS capability is enabled, then a 3195 session ID created on one node of the storage cluster MUST be 3196 destroyable via DESTROY_SESSION. 
In addition, DESTROY_CLIENTID 3197 and an EXCHANGE_ID with a new verifier affect all sessions 3198 regardless of which node the sessions were created on. 3200 14.7. Operation 64: WRITE_PLUS 3202 14.7.1. ARGUMENT 3204 struct data_info4 { 3205 offset4 di_offset; 3206 length4 di_length; 3207 bool di_allocated; 3208 }; 3210 struct data4 { 3211 offset4 d_offset; 3212 bool d_allocated; 3213 opaque d_data<>; 3214 }; 3215 union write_plus_arg4 switch (data_content4 wpa_content) { 3216 case NFS4_CONTENT_DATA: 3217 data4 wpa_data; 3218 case NFS4_CONTENT_APP_DATA_HOLE: 3219 app_data_hole4 wpa_adh; 3220 case NFS4_CONTENT_HOLE: 3221 data_info4 wpa_hole; 3222 default: 3223 void; 3224 }; 3226 struct WRITE_PLUS4args { 3227 /* CURRENT_FH: file */ 3228 stateid4 wp_stateid; 3229 stable_how4 wp_stable; 3230 write_plus_arg4 wp_data<>; 3231 }; 3233 14.7.2. RESULT 3235 struct write_response4 { 3236 stateid4 wr_callback_id<1>; 3237 count4 wr_count; 3238 stable_how4 wr_committed; 3239 verifier4 wr_writeverf; 3240 }; 3242 union WRITE_PLUS4res switch (nfsstat4 wp_status) { 3243 case NFS4_OK: 3244 write_response4 wp_resok4; 3245 default: 3246 void; 3247 }; 3249 14.7.3. DESCRIPTION 3251 The WRITE_PLUS operation is an extension of the NFSv4.1 WRITE 3252 operation (see Section 18.2 of [1]) and writes data to the regular 3253 file identified by the current filehandle. The server MAY write 3254 fewer bytes than requested by the client. 3256 The WRITE_PLUS argument consists of an array of wp_data entries, 3257 each of which describes a data_content4 type of data (Section 7.1.2). 3259 For NFSv4.2, the allowed values are data, ADH, and hole. The array 3260 contents MUST be contiguous in the file. A successful WRITE_PLUS 3261 will construct a reply for wr_count, wr_committed, and wr_writeverf 3262 as per the NFSv4.1 WRITE operation results. If wr_callback_id is 3263 set, it indicates an asynchronous reply (see Section 14.7.3.4).
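The requirement that the wp_data array contents be contiguous in the file can be checked mechanically. The following sketch models each write_plus_arg4 arm simply as an (offset, length) pair, a deliberate simplification of the XDR above rather than the wire format:

```python
def wp_data_contiguous(entries):
    """Return True if the (offset, length) entries describe one
    contiguous byte range, with no gaps or overlaps between adjacent
    entries, as WRITE_PLUS requires of its wp_data array."""
    expected = None
    for offset, length in entries:
        if expected is not None and offset != expected:
            return False  # gap or overlap between adjacent entries
        expected = offset + length
    return True
```

A server receiving a non-contiguous array would reject the request; the check applies uniformly whether an entry carries data, a hole, or an ADH.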
3265 WRITE_PLUS has to support all of the errors returned by 3266 WRITE, plus NFS4ERR_UNION_NOTSUPP. If the client asks for a hole and 3267 the server does not support that arm of the discriminated union, but 3268 does support one or more additional arms, it can use 3269 NFS4ERR_UNION_NOTSUPP to signal to the client that it supports the 3270 operation but not that arm. 3272 If the client supports WRITE_PLUS and any arm of the discriminated 3273 union other than NFS4_CONTENT_DATA, it MUST support CB_OFFLOAD. 3275 14.7.3.1. Data 3277 The d_offset specifies the offset where the data should be written. 3278 A d_offset of zero specifies that the write should start at the 3279 beginning of the file. The d_count, as encoded as part of the opaque 3280 data parameter, represents the number of bytes of data that are to be 3281 written. If the d_count is zero, the WRITE_PLUS will succeed and 3282 return a d_count of zero, subject to permissions checking. 3284 Note that d_allocated has no meaning for WRITE_PLUS. 3286 The data MUST be written synchronously and MUST follow the same 3287 semantics of COMMIT as does the WRITE operation. 3289 14.7.3.2. Hole punching 3291 Whenever a client wishes to zero the blocks backing a particular 3292 region in the file, it calls the WRITE_PLUS operation with the 3293 current filehandle set to the filehandle of the file in question, and 3294 the equivalent of start offset and length in bytes of the region set 3295 in wpa_hole.di_offset and wpa_hole.di_length respectively. If 3296 wpa_hole.di_allocated is set to TRUE, then the blocks will be zeroed, 3297 and if it is set to FALSE, then they will be deallocated. All 3298 further reads to this region MUST return zeros until overwritten. 3299 The filehandle specified must be that of a regular file. 3301 Situations may arise where di_offset and/or di_offset + di_length 3302 will not be aligned to a boundary at which the server performs 3303 allocations and deallocations.
For most file systems, this is the block size of 3304 the file system. In such a case, the server can deallocate as many 3305 bytes as it can in the region. The blocks that cannot be deallocated 3306 MUST be zeroed. Except for the block deallocation and maximum hole 3307 punching capability, a WRITE_PLUS operation is to be treated similar 3308 to a write of zeroes. 3310 The server is not required to complete deallocating the blocks 3311 specified in the operation before returning. The server SHOULD 3312 return an asynchronous result if it can determine the operation will 3313 be long running (see Section 14.7.3.4). 3315 If used to hole punch, WRITE_PLUS will result in the space_used 3316 attribute being decreased by the number of bytes that were 3317 deallocated. The space_freed attribute may or may not decrease, 3318 depending on the support and whether the blocks backing the specified 3319 range were shared or not. The size attribute will remain unchanged. 3321 The WRITE_PLUS operation MUST NOT change the space reservation 3322 guarantee of the file. While the server can deallocate the blocks 3323 specified by di_offset and di_length, future writes to this region 3324 MUST NOT fail with NFSERR_NOSPC. 3326 14.7.3.3. ADHs 3328 If the server supports ADHs, then it MUST support the 3329 NFS4_CONTENT_APP_DATA_HOLE arm of the WRITE_PLUS operation. The 3330 server has no concept of the structure imposed by the application. 3331 It is only when the application writes to a section of the file does 3332 order get imposed. In order to detect corruption even before the 3333 application utilizes the file, the application will want to 3334 initialize a range of ADHs using WRITE_PLUS. 3336 For ADHs, when the client invokes the WRITE_PLUS operation, it has 3337 two desired results: 3339 1. The structure described by the app_data_block4 be imposed on the 3340 file. 3342 2. The contents described by the app_data_block4 be sparse. 
3344 If the server supports the WRITE_PLUS operation, it still might not 3345 support sparse files. So if it receives the WRITE_PLUS operation, 3346 then it MUST populate the contents of the file with the initialized 3347 ADHs. The server SHOULD return an asynchronous result if it can 3348 determine the operation will be long running (see Section 14.7.3.4). 3350 If the data was already initialized, there are two interesting 3351 scenarios: 3353 1. The data blocks are allocated. 3355 2. Initializing in the middle of an existing ADH. 3357 If the data blocks were already allocated, then the WRITE_PLUS is a 3358 hole punch operation. If the server supports sparse files, then the 3359 data blocks are to be deallocated. If not, then the data blocks are 3360 to be rewritten in the indicated ADH format. 3362 Since the server has no knowledge of ADHs, it should not report 3363 misaligned creation of ADHs. Even if it can detect them, it 3364 cannot disallow them, as the application might be in the process of 3365 changing the size of the ADHs. Thus the server must be prepared to 3366 handle a WRITE_PLUS into an existing ADH. 3368 This document does not mandate the manner in which the server stores 3369 ADHs sparsely for a file. However, if a WRITE_PLUS arrives that 3370 will force a new ADH to start inside an existing ADH, then the server 3371 will have three ADHs instead of two: one for the region before the 3372 WRITE_PLUS, one for the WRITE_PLUS itself, and one for the region 3373 after it. Note that depending on server-specific policies for 3374 block allocation, there may also be some physical blocks allocated to 3375 align the boundaries.
If it does not set the wr_callback_id, then the result 3383 is synchronous. 3385 When the client determines that the reply will be given 3386 asynchronously, it should not assume anything about the contents of 3387 what it wrote until it is informed by the server that the operation 3388 is complete. It can use OFFLOAD_STATUS (Section 14.5) to monitor the 3389 operation and OFFLOAD_ABORT (Section 14.2) to cancel the operation. 3390 An example of an asynchronous WRITE_PLUS is shown in Figure 6. Note 3391 that as with the COPY operation, WRITE_PLUS must provide a stateid 3392 for tracking the asynchronous operation. 3394 Client Server 3395 + + 3396 | | 3397 |--- OPEN ---------------------------->| Client opens 3398 |<------------------------------------/| the file 3399 | | 3400 |--- WRITE_PLUS ---------------------->| Client punches 3401 |<------------------------------------/| a hole 3402 | | 3403 | | 3404 |--- OFFLOAD_STATUS ------------------>| Client may poll 3405 |<------------------------------------/| for status 3406 | | 3407 | . | Multiple OFFLOAD_STATUS 3408 | . | operations may be sent. 3409 | . | 3410 | | 3411 |<-- CB_OFFLOAD -----------------------| Server reports results 3412 |\------------------------------------>| 3413 | | 3414 |--- CLOSE --------------------------->| Client closes 3415 |<------------------------------------/| the file 3416 | | 3417 | | 3419 Figure 6: An asynchronous WRITE_PLUS. 3421 When CB_OFFLOAD informs the client of the successful WRITE_PLUS, the 3422 write_response4 embedded in the operation will provide the necessary 3423 information that a synchronous WRITE_PLUS would have provided. 3425 Regardless of whether the operation is asynchronous or synchronous, 3426 it MUST still support the COMMIT operation semantics as outlined in 3427 Section 18.3 of [1]. I.e., COMMIT works on one or more WRITE 3428 operations and the WRITE_PLUS operation can appear as several WRITE 3429 operations to the server.
The client can use locking operations to 3430 control the behavior on the server with respect to long-running 3431 asynchronous write operations. 3433 14.8. Operation 67: IO_ADVISE - Application I/O access pattern hints 3434 14.8.1. ARGUMENT 3436 enum IO_ADVISE_type4 { 3437 IO_ADVISE4_NORMAL = 0, 3438 IO_ADVISE4_SEQUENTIAL = 1, 3439 IO_ADVISE4_SEQUENTIAL_BACKWARDS = 2, 3440 IO_ADVISE4_RANDOM = 3, 3441 IO_ADVISE4_WILLNEED = 4, 3442 IO_ADVISE4_WILLNEED_OPPORTUNISTIC = 5, 3443 IO_ADVISE4_DONTNEED = 6, 3444 IO_ADVISE4_NOREUSE = 7, 3445 IO_ADVISE4_READ = 8, 3446 IO_ADVISE4_WRITE = 9, 3447 IO_ADVISE4_INIT_PROXIMITY = 10 3448 }; 3450 struct IO_ADVISE4args { 3451 /* CURRENT_FH: file */ 3452 stateid4 iar_stateid; 3453 offset4 iar_offset; 3454 length4 iar_count; 3455 bitmap4 iar_hints; 3456 }; 3458 14.8.2. RESULT 3460 struct IO_ADVISE4resok { 3461 bitmap4 ior_hints; 3462 }; 3464 union IO_ADVISE4res switch (nfsstat4 _status) { 3465 case NFS4_OK: 3466 IO_ADVISE4resok resok4; 3467 default: 3468 void; 3469 }; 3471 14.8.3. DESCRIPTION 3473 The IO_ADVISE operation sends an I/O access pattern hint to the 3474 server for the owner of the stateid for a given byte range specified 3475 by iar_offset and iar_count. The byte range specified by iar_offset 3476 and iar_count need not currently exist in the file, but the iar_hints 3477 will apply to the byte range when it does exist. If iar_count is 0, 3478 all data following iar_offset is specified. The server MAY ignore 3479 the advice. 3481 The following are the allowed hints for a stateid holder: 3483 IO_ADVISE4_NORMAL There is no advice to give; this is the default 3484 behavior. 3486 IO_ADVISE4_SEQUENTIAL Expects to access the specified data 3487 sequentially from lower offsets to higher offsets. 3489 IO_ADVISE4_SEQUENTIAL_BACKWARDS Expects to access the specified data 3490 sequentially from higher offsets to lower offsets. 3492 IO_ADVISE4_RANDOM Expects to access the specified data in a random 3493 order.
3495 IO_ADVISE4_WILLNEED Expects to access the specified data in the near 3496 future. 3498 IO_ADVISE4_WILLNEED_OPPORTUNISTIC Expects to possibly access the 3499 data in the near future. This is a speculative hint, and 3500 therefore the server should prefetch data or indirect blocks only 3501 if it can be done at a marginal cost. 3503 IO_ADVISE4_DONTNEED Expects that it will not access the specified 3504 data in the near future. 3506 IO_ADVISE4_NOREUSE Expects to access the specified data once and then 3507 not reuse it thereafter. 3509 IO_ADVISE4_READ Expects to read the specified data in the near 3510 future. 3512 IO_ADVISE4_WRITE Expects to write the specified data in the near 3513 future. 3515 IO_ADVISE4_INIT_PROXIMITY Informs the server that the data in the 3516 byte range remains important to the client. 3518 Since IO_ADVISE is a hint, a server SHOULD NOT return an error and 3519 invalidate an entire COMPOUND request if one of the sent hints in 3520 iar_hints is not supported by the server. Also, the server MUST NOT 3521 return an error if the client sends contradictory hints to the 3522 server, e.g., IO_ADVISE4_SEQUENTIAL and IO_ADVISE4_RANDOM in a single 3523 IO_ADVISE operation. In these cases, the server MUST return success 3524 and an ior_hints value that indicates the hints it intends to 3525 implement. This may mean simply returning IO_ADVISE4_NORMAL. 3527 The ior_hints value returned by the server is primarily for debugging 3528 purposes, since the server is under no obligation to carry out the 3529 hints that it describes in the ior_hints result. In addition, while 3530 the server may have intended to implement the hints returned in 3531 ior_hints, as time progresses, the server may need to change its 3532 handling of a given file due to several reasons including, but not 3533 limited to, memory pressure, additional IO_ADVISE hints sent by other 3534 clients, and heuristically detected file access patterns.
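Since the server must not fail the operation on unsupported or contradictory hints, it can reconcile iar_hints against what it supports and report the survivors in ior_hints. A minimal sketch of that reconciliation follows; the contradictory-pair table is an assumed server policy, not part of the protocol:

```python
IO_ADVISE4_NORMAL = 0
IO_ADVISE4_SEQUENTIAL = 1
IO_ADVISE4_SEQUENTIAL_BACKWARDS = 2
IO_ADVISE4_RANDOM = 3

# Hint pairs this hypothetical server treats as contradictory.
CONTRADICTORY = [
    frozenset({IO_ADVISE4_SEQUENTIAL, IO_ADVISE4_RANDOM}),
    frozenset({IO_ADVISE4_SEQUENTIAL, IO_ADVISE4_SEQUENTIAL_BACKWARDS}),
]

def resolve_hints(iar_hints, supported):
    """Return the ior_hints the server intends to implement.

    Unsupported hints are silently dropped, and when a contradictory
    pair is requested neither member is implemented; the operation
    itself still succeeds. Falls back to IO_ADVISE4_NORMAL when no
    hint survives."""
    hints = set(iar_hints) & supported
    for pair in CONTRADICTORY:
        if pair <= hints:
            hints -= pair  # contradictory pair: implement neither
    return hints or {IO_ADVISE4_NORMAL}
```

Note that the returned value is advisory only; the server remains free to change its handling of the file afterward.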
3536 The server MAY return different advice than what the client 3537 requested. If it does, then this might be due to one of several 3538 conditions, including, but not limited to, another client advising of 3539 a different I/O access pattern; a different I/O access pattern from 3540 another client that the server has heuristically detected; or 3541 the server not being able to support the requested I/O access pattern, 3542 perhaps due to a temporary resource limitation. 3544 Each issuance of the IO_ADVISE operation overrides all previous 3545 issuances of IO_ADVISE for a given byte range. This effectively 3546 follows a strategy of last hint wins for a given stateid and byte 3547 range. 3549 Clients should assume that hints included in an IO_ADVISE operation 3550 will be forgotten once the file is closed. 3552 14.8.4. IMPLEMENTATION 3554 The NFS client may choose to issue an IO_ADVISE operation to the 3555 server in several different instances. 3557 The most obvious is in direct response to an application's execution 3558 of posix_fadvise(). In this case, IO_ADVISE4_WRITE and 3559 IO_ADVISE4_READ may be set based upon the type of file access 3560 specified when the file was opened. 3562 14.8.5. IO_ADVISE4_INIT_PROXIMITY 3564 The IO_ADVISE4_INIT_PROXIMITY hint is non-POSIX in origin and conveys 3565 that the client has recently accessed the byte range in its own 3566 cache. That is, it has not accessed the data on the server, but it 3567 has accessed it locally. When the server reaches resource exhaustion, 3568 knowing which data is more important allows the server to make better 3569 choices about which data to, for example, purge from a cache or move 3570 to secondary storage. It also informs the server which delegations are 3571 more important, since if delegations are working correctly, once a 3572 byte range has been delegated to a client and the client has read its 3573 content, the server might never receive another read request for that 3574 byte range.
3576 This hint is also useful in the case of NFS clients that are network 3577 booting from a server. If the first client to be booted sends this 3578 hint, then it keeps the cache warm for the remaining clients. 3580 14.8.6. pNFS File Layout Data Type Considerations 3582 The IO_ADVISE considerations for pNFS are very similar to the COMMIT 3583 considerations for pNFS. That is, as with COMMIT, some NFS server 3584 implementations prefer IO_ADVISE be done on the DS, and some prefer 3585 it be done on the MDS. 3587 So for the file's layout type, it is proposed that NFSv4.2 include an 3588 additional hint NFL42_UFLG_IO_ADVISE_THRU_MDS, which is valid only on 3589 NFSv4.2 or higher. Any file's layout obtained with NFSv4.1 MUST NOT 3590 have NFL42_UFLG_IO_ADVISE_THRU_MDS set. Any file's layout obtained 3591 with NFSv4.2 MAY have NFL42_UFLG_IO_ADVISE_THRU_MDS set. If the 3592 client does not implement IO_ADVISE, then it MUST ignore 3593 NFL42_UFLG_IO_ADVISE_THRU_MDS. 3595 If NFL42_UFLG_IO_ADVISE_THRU_MDS is set, the client MUST send the 3596 IO_ADVISE operation to the MDS in order for it to be honored by the 3597 DS. Once the MDS receives the IO_ADVISE operation, it will 3598 communicate the advice to each DS. 3600 If NFL42_UFLG_IO_ADVISE_THRU_MDS is not set, then the client SHOULD 3601 send an IO_ADVISE operation to the appropriate DS for the specified 3602 byte range. While the client MAY always send IO_ADVISE to the MDS, 3603 if the server has not set NFL42_UFLG_IO_ADVISE_THRU_MDS, the client 3604 should expect that such an IO_ADVISE is futile. Note that a client 3605 SHOULD use the same set of arguments on each IO_ADVISE sent to a DS 3606 for the same open file reference. 3608 The server is not required to support different advice for different 3609 DSs with the same open file reference. 3611 14.8.6.1.
Dense and Sparse Packing Considerations

The IO_ADVISE operation MUST use the iar_offset and byte range as dictated by the presence or absence of NFL4_UFLG_DENSE.

E.g., if NFL4_UFLG_DENSE is present, and a READ or WRITE to the DS for iar_offset 0 really means iar_offset 10000 in the logical file, then an IO_ADVISE for iar_offset 0 means iar_offset 10000.

E.g., if NFL4_UFLG_DENSE is absent, then a READ or WRITE to the DS for iar_offset 0 really means iar_offset 0 in the logical file, and thus an IO_ADVISE for iar_offset 0 means iar_offset 0 in the logical file.

E.g., suppose NFL4_UFLG_DENSE is present, the stripe unit is 1000 bytes, the stripe count is 10, and the dense DS file is serving iar_offset 0, such that a READ or WRITE to the DS for iar_offsets 0, 1000, 2000, and 3000 really means iar_offsets 10000, 20000, 30000, and 40000 (implying a stripe count of 10 and a stripe unit of 1000).  Then an IO_ADVISE sent to the same DS with an iar_offset of 500 and an iar_count of 3000 means that the IO_ADVISE applies to these byte ranges of the dense DS file:

   - 500 to 999
   - 1000 to 1999
   - 2000 to 2999
   - 3000 to 3499

I.e., the contiguous range 500 to 3499 as specified in IO_ADVISE.

It also applies to these byte ranges of the logical file:

   - 10500 to 10999 (500 bytes)
   - 20000 to 20999 (1000 bytes)
   - 30000 to 30999 (1000 bytes)
   - 40000 to 40499 (500 bytes)
   (total 3000 bytes)

E.g., if NFL4_UFLG_DENSE is absent, the stripe unit is 250 bytes, the stripe count is 4, and the sparse DS file is serving iar_offset 0, then a READ or WRITE to the DS for iar_offsets 0, 1000, 2000, and 3000 really means iar_offsets 0, 1000, 2000, and 3000 in the logical file, keeping in mind that on the DS file, byte ranges 250 to 999, 1250 to 1999, 2250 to 2999, and 3250 to 3999 are not accessible.
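The dense-packing offset translation illustrated above can be sketched non-normatively.  The sketch assumes the usual NFSv4.1 file layout rule that data server k holds stripe units k, k + stripe_count, k + 2*stripe_count, and so on; the function names are illustrative:

```python
def dense_to_logical(ds_offset, stripe_unit, stripe_count, ds_index):
    """Map an offset in a densely packed DS file to the logical file
    offset, assuming data server ds_index holds stripe units
    ds_index, ds_index + stripe_count, ds_index + 2*stripe_count, ..."""
    unit, rem = divmod(ds_offset, stripe_unit)
    return (ds_index + unit * stripe_count) * stripe_unit + rem

def dense_advise_ranges(ds_offset, length, stripe_unit, stripe_count,
                        ds_index):
    """Expand an IO_ADVISE over [ds_offset, ds_offset + length) of the
    dense DS file into the logical byte ranges it covers, one range
    per stripe-unit fragment."""
    ranges, pos, end = [], ds_offset, ds_offset + length
    while pos < end:
        # End of the stripe unit that contains pos.
        unit_end = (pos // stripe_unit + 1) * stripe_unit
        chunk = min(end, unit_end) - pos
        lo = dense_to_logical(pos, stripe_unit, stripe_count, ds_index)
        ranges.append((lo, lo + chunk))
        pos += chunk
    return ranges
```

With a stripe unit of 1000 and a stripe count of 10, an IO_ADVISE at dense offset 500 for 3000 bytes covers four logical fragments of 500, 1000, 1000, and 500 bytes, matching the fragment pattern in the example above.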
Then an IO_ADVISE sent to the same DS with an iar_offset of 500 and an iar_count of 3000 means that the IO_ADVISE applies to these byte ranges of the logical file and the sparse DS file:

   - 500 to 999 (500 bytes) - no effect
   - 1000 to 1249 (250 bytes) - effective
   - 1250 to 1999 (750 bytes) - no effect
   - 2000 to 2249 (250 bytes) - effective
   - 2250 to 2999 (750 bytes) - no effect
   - 3000 to 3249 (250 bytes) - effective
   - 3250 to 3499 (250 bytes) - no effect
   (subtotal 2250 bytes) - no effect
   (subtotal 750 bytes) - effective
   (grand total 3000 bytes) - no effect + effective

If neither the NFL42_UFLG_IO_ADVISE_THRU_MDS flag nor the NFL4_UFLG_DENSE flag is set in the layout, then any IO_ADVISE request sent to the data server with a byte range that overlaps a stripe unit that the data server does not serve MUST NOT result in the status NFS4ERR_PNFS_IO_HOLE.  Instead, the response SHOULD be successful, and if the server applies IO_ADVISE hints on any stripe units that overlap with the specified range, those hints SHOULD be indicated in the response.

14.9.  Changes to Operation 51: LAYOUTRETURN

14.9.1.  Introduction

In the pNFS description provided in [1], the client is not able to relay an error code from the DS to the MDS.  In the specification of the Objects-Based Layout protocol [7], use is made of the opaque lrf_body field of the LAYOUTRETURN argument to do such a relaying of error codes.  In this section, we define a new data structure to enable the passing of error codes back to the MDS and provide some guidelines on what both the client and MDS should expect in such circumstances.

There are two broad classes of errors, transient and persistent.  The client SHOULD strive to only use this new mechanism to report persistent errors.  It MUST be able to deal with transient issues by itself.
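As a non-normative sketch of that intended client behavior, errors observed during I/O might be filtered so that only those the client judges persistent are packed into the error array reported via LAYOUTRETURN.  The classification set and the tuple layout below are assumptions for illustration only:

```python
# Hypothetical sketch: the client collects (deviceid, status, opnum)
# records while doing I/O, deals with transient problems itself, and
# reports only persistent errors back to the MDS.
TRANSIENT = {"NFS4ERR_DELAY", "NFS4ERR_GRACE"}  # assumed classification

def build_error_report(observed_errors):
    """Keep only the errors judged persistent; these would populate
    the error array carried in the LAYOUTRETURN arguments."""
    return [(dev, status, op)
            for dev, status, op in observed_errors
            if status not in TRANSIENT]
```

A real client would apply an implementation-specific test (e.g., retry counts and timeouts) rather than a fixed status list, and must still expect the MDS to treat a reported error as transient.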
Also, while the client might consider an issue to be persistent, it MUST be prepared for the MDS to consider such issues to be transient.  A prime example of this is if the MDS fences off a client from either a stateid or a filehandle.  The client will get an error from the DS and might relay either NFS4ERR_ACCESS or NFS4ERR_BAD_STATEID back to the MDS, in the belief that this is a hard error.  The MDS can safely ignore such a report: from its perspective, the mission is accomplished in that the client has returned a layout that the MDS had most likely recalled.

The client might also need to inform the MDS that it cannot reach one or more of the DSes.  While the MDS can detect the connectivity of both of these paths:

o  MDS to DS

o  MDS to client

it cannot determine if the client and DS path is working.  As with the case of the DS passing errors to the client, the client must be prepared for the MDS to consider such outages as being transitory.

The existing LAYOUTRETURN operation is extended by introducing a new data structure to report errors, layoutreturn_device_error4.  Also, layoutreturn_error_report4 is introduced to enable an array of such errors to be reported.

14.9.2.  ARGUMENT

The ARGUMENT specification of the LAYOUTRETURN operation in section 18.44.1 of [1] is augmented by the following XDR code [23]:

   struct layoutreturn_device_error4 {
           deviceid4       lrde_deviceid;
           nfsstat4        lrde_status;
           nfs_opnum4      lrde_opnum;
   };

   struct layoutreturn_error_report4 {
           layoutreturn_device_error4      lrer_errors<>;
   };

14.9.3.  RESULT

The RESULT of the LAYOUTRETURN operation is unchanged; see section 18.44.2 of [1].

14.9.4.  DESCRIPTION

The following text is added to the end of the LAYOUTRETURN operation DESCRIPTION in section 18.44.3 of [1].
When a client uses LAYOUTRETURN with a type of LAYOUTRETURN4_FILE, then if the lrf_body field is NULL, it indicates to the MDS that the client experienced no errors.  If lrf_body is non-NULL, then the field references error information which is layout type specific.  I.e., the Objects-Based Layout protocol can continue to utilize lrf_body as specified in [7].  For both Files-Based and Block-Based Layouts, the field references a layoutreturn_error_report4, which contains an array of layoutreturn_device_error4 elements.

Each individual layoutreturn_device_error4 describes a single error associated with a DS, which is identified via lrde_deviceid.  The operation which returned the error is identified via lrde_opnum.  Finally, the NFS error value (nfsstat4) encountered is provided via lrde_status and may consist of the following error codes:

NFS4ERR_NXIO:  The client was unable to establish any communication with the DS.

NFS4ERR_*:  The client was able to establish communication with the DS and is returning one of the allowed error codes for the operation denoted by lrde_opnum.

14.9.5.  IMPLEMENTATION

The following text is added to the end of the LAYOUTRETURN operation IMPLEMENTATION in section 18.44.4 of [1].

Clients are expected to tolerate transient storage device errors, and hence clients SHOULD NOT use the LAYOUTRETURN error handling for device access problems that may be transient.  The methods by which a client decides whether a device access problem is transient vs. persistent are implementation-specific, but may include retrying I/Os to a data server under appropriate conditions.

When an I/O fails to a storage device, the client SHOULD retry the failed I/O via the MDS.
In this situation, before retrying the I/O, the client SHOULD return the layout, or the affected portion thereof, and SHOULD indicate which storage device or devices were problematic.  The client needs to do this when the DS is being unresponsive in order to fence off any failed write attempts and ensure that they do not end up overwriting any later data being written through the MDS.  If the client does not do this, the MDS MAY issue a layout recall callback in order to perform the retried I/O.

The client needs to be cognizant that since this error handling is optional in the MDS, the MDS may silently ignore this functionality.  Also, as the MDS may consider some issues the client reports to be expected (see Section 14.9.1), the client might find it difficult to detect an MDS which has not implemented error handling via LAYOUTRETURN.

If an MDS is aware that a storage device is proving problematic to a client, the MDS SHOULD NOT include that storage device in any pNFS layouts sent to that client.  If the MDS is aware that a storage device is affecting many clients, then the MDS SHOULD NOT include that storage device in any pNFS layouts sent out.  If a client asks for a new layout for the file from the MDS, it MUST be prepared for the MDS to return that storage device in the layout.  The MDS might not have any choice in using the storage device, i.e., there might only be one possible layout for the system.  Also, in the case of existing files, the MDS might have no choice in which storage devices to hand out to clients.

The MDS is not required to indefinitely retain per-client storage device error information.  An MDS is also not required to automatically reinstate use of a previously problematic storage device; administrative intervention may be required instead.

14.10.  Operation 65: READ_PLUS

14.10.1.
ARGUMENT

   struct READ_PLUS4args {
           /* CURRENT_FH: file */
           stateid4        rpa_stateid;
           offset4         rpa_offset;
           count4          rpa_count;
   };

14.10.2.  RESULT

   struct data_info4 {
           offset4         di_offset;
           length4         di_length;
           bool            di_allocated;
   };

   struct data4 {
           offset4         d_offset;
           bool            d_allocated;
           opaque          d_data<>;
   };

   union read_plus_content switch (data_content4 rpc_content) {
   case NFS4_CONTENT_DATA:
           data4           rpc_data;
   case NFS4_CONTENT_APP_DATA_HOLE:
           app_data_hole4  rpc_adh;
   case NFS4_CONTENT_HOLE:
           data_info4      rpc_hole;
   default:
           void;
   };

   /*
    * Allow a return of an array of contents.
    */
   struct read_plus_res4 {
           bool                    rpr_eof;
           read_plus_content       rpr_contents<>;
   };

   union READ_PLUS4res switch (nfsstat4 rp_status) {
   case NFS4_OK:
           read_plus_res4  rp_resok4;
   default:
           void;
   };

14.10.3.  DESCRIPTION

The READ_PLUS operation is based upon the NFSv4.1 READ operation (see Section 18.22 of [1]) and similarly reads data from the regular file identified by the current filehandle.

The client provides an rpa_offset of where the READ_PLUS is to start and an rpa_count of how many bytes are to be read.  An rpa_offset of zero means to read data starting at the beginning of the file.  If rpa_offset is greater than or equal to the size of the file, the status NFS4_OK is returned with di_length (the data length) set to zero and eof set to TRUE.

The READ_PLUS result is comprised of an array of rpr_contents, each of which describes a data_content4 type of data (Section 7.1.2).  For NFSv4.2, the allowed values are data, ADH, and hole.  A server is required to support the data type, but not the ADH or hole types.  Both an ADH and a hole must be returned in their entirety; clients must be prepared to get more information than they requested.
Both the start and the end of the hole may exceed what was requested.  The array contents MUST be contiguous in the file.

READ_PLUS has to support all of the errors which are returned by READ plus NFS4ERR_UNION_NOTSUPP.  If the client asks for a hole and the server does not support that arm of the discriminated union, but does support one or more additional arms, it can signal to the client, with NFS4ERR_UNION_NOTSUPP, that it supports the operation but not that arm.

If the data to be returned is comprised entirely of zeros, then the server may elect to return that data as a hole.  The server differentiates this to the client by setting di_allocated to TRUE in this case.  Note that in such a scenario, the server is not required to determine the full extent of the "hole"; it does not need to determine where the zeros start and end.  If the server elects to return the hole as data, then it can set d_allocated to FALSE in the rpc_data to indicate it is a hole.

The server may elect to return adjacent elements of the same type.  For example, the guard pattern or block size of an ADH might change, which would require adjacent elements of type ADH.  Likewise, if the server has a range of data comprised entirely of zeros and then a hole, it might want to return two adjacent holes to the client.

If the client specifies an rpa_count value of zero, the READ_PLUS succeeds and returns zero bytes of data.  In all situations, the server may choose to return fewer bytes than specified by the client.  The client needs to check for this condition and handle it appropriately.

If the client specifies an rpa_offset and rpa_count value that is entirely contained within a hole of the file, then the di_offset and di_length returned must be for the entire hole.
This result is considered valid until the file is changed (detected via the change attribute).  The server MUST provide the same semantics for the hole as if the client read the region and received zeroes; the content lifetime of the implied hole MUST be exactly the same as that of any other read data.

If the client specifies an rpa_offset and rpa_count value that begins in a non-hole of the file but extends into a hole, the server should return an array comprised of both data and a hole.  The client MUST be prepared for the server to return a short read describing just the data.  The client will then issue another READ_PLUS for the remaining bytes, to which the server will respond with information about the hole in the file.

Except when special stateids are used, the stateid value for a READ_PLUS request represents a value returned from a previous byte-range lock or share reservation request or the stateid associated with a delegation.  The stateid identifies the associated owners if any and is used by the server to verify that the associated locks are still valid (e.g., have not been revoked).

If the read ended at the end-of-file (formally, in a correctly formed READ_PLUS operation, if rpa_offset + rpa_count is equal to the size of the file), or the READ_PLUS operation extends beyond the size of the file (if rpa_offset + rpa_count is greater than the size of the file), eof is returned as TRUE; otherwise, it is FALSE.  A successful READ_PLUS of an empty file will always return eof as TRUE.

If the current filehandle is not an ordinary file, an error will be returned to the client.  In the case that the current filehandle represents an object of type NF4DIR, NFS4ERR_ISDIR is returned.  If the current filehandle designates a symbolic link, NFS4ERR_SYMLINK is returned.  In all other cases, NFS4ERR_WRONG_TYPE is returned.
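The hole and data segmentation rules described above can be modeled non-normatively.  The sketch below assumes a server that knows the file as a list of (offset, length, kind) segments, applies a Hole Threshold below which holes are folded into zero data, and always reports a returned hole in its entirety; the names and the threshold mechanism are illustrative (the sparse-file example in Section 14.10.5 counts in units of 1000):

```python
K = 1000  # the example in Section 14.10.5 uses K = 1000

def read_plus(segments, file_size, offset, count, threshold=32 * K):
    """Model a READ_PLUS result as (eof, contents), where contents is
    a list of ('data' | 'hole', offset, length) entries."""
    contents, pos, limit = [], offset, offset + count
    for s, length, kind in segments:
        if s + length <= pos:
            continue                     # segment entirely before request
        if pos >= limit:
            break                        # request satisfied
        if kind == "hole" and length >= threshold:
            # Report the hole in its entirety, even past the request.
            contents.append(("hole", s, length))
            pos = s + length
        else:
            # Data, or a hole shorter than the threshold returned as
            # zero data; coalesce with a preceding data entry.
            seg_end = min(s + length, limit)
            if contents and contents[-1][0] == "data":
                _, d_off, _ = contents.pop()
                contents.append(("data", d_off, seg_end - d_off))
            else:
                contents.append(("data", pos, seg_end - pos))
            pos = seg_end
    return pos >= file_size, contents
```

Run against the sparse file of Table 7 in Section 14.10.5 (holes at 0-16K, 32K-256K, and 288K-354K, file size 418K), this model reproduces the four READ_PLUS results listed there, including the short hole at the start of the file being returned as 32K of data together with the following hole's full 224K extent.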
For a READ_PLUS with a stateid value of all bits equal to zero, the server MAY allow the READ_PLUS to be serviced subject to mandatory byte-range locks or the current share deny modes for the file.  For a READ_PLUS with a stateid value of all bits equal to one, the server MAY allow READ_PLUS operations to bypass locking checks at the server.

On success, the current filehandle retains its value.

14.10.4.  IMPLEMENTATION

In general, the IMPLEMENTATION notes for READ in Section 18.22.4 of [1] also apply to READ_PLUS.  One delta is that when the owner has a locked byte range, the server MUST return an array of rpr_contents with values inside that range.

14.10.4.1.  Additional pNFS Implementation Information

With pNFS, the semantics of using READ_PLUS remains the same.  Any data server MAY return a hole or ADH result for a READ_PLUS request that it receives.  When a data server chooses to return such a result, it has the option of returning information for the data stored on that data server (as defined by the data layout), but it MUST NOT return results for a byte range that includes data managed by another data server.

A data server should do its best to return as much information about an ADH as is feasible without having to contact the metadata server.  If communication with the metadata server is required, then every attempt should be made to minimize the number of requests.

If mandatory locking is enforced, then the data server must also ensure that it returns only information that is within the owner's locked byte range.

14.10.5.  READ_PLUS with Sparse Files Example

The following table describes a sparse file.  For each byte range, the file contains either non-zero data or a hole.  In addition, the server in this example uses a Hole Threshold of 32K.
   +-------------+----------+
   | Byte-Range  | Contents |
   +-------------+----------+
   | 0-15999     | Hole     |
   | 16K-31999   | Non-Zero |
   | 32K-255999  | Hole     |
   | 256K-287999 | Non-Zero |
   | 288K-353999 | Hole     |
   | 354K-417999 | Non-Zero |
   +-------------+----------+

                  Table 7

Under the given circumstances, if a client was to read from the file with a max read size of 64K, the following will be the results for the given READ_PLUS calls.  This assumes the client has already opened the file, acquired a valid stateid ('s' in the example), and just needs to issue READ_PLUS requests.

1.  READ_PLUS(s, 0, 64K) --> NFS_OK, eof = false, <data[0, 32K], hole[32K, 224K]>.  Since the first hole is less than the server's Hole Threshold, the first 32K of the file is returned as data and the remaining 32K is returned as a hole which actually extends to 256K.

2.  READ_PLUS(s, 32K, 64K) --> NFS_OK, eof = false, <hole[32K, 224K]>.  The requested range was all zeros, and the current hole begins at offset 32K and is 224K in length.  Note that the client should not have followed up the previous READ_PLUS request with this one, as the hole information from the previous call extended past what the client was requesting.

3.  READ_PLUS(s, 256K, 64K) --> NFS_OK, eof = false, <data[256K, 32K], hole[288K, 66K]>.  Returns an array of the 32K data and the hole which extends to 354K.

4.  READ_PLUS(s, 354K, 64K) --> NFS_OK, eof = true, <data[354K, 64K]>.  Returns the final 64K of data and informs the client there is no more data in the file.

14.11.  Operation 66: SEEK

SEEK is an operation that allows a client to determine the location of the next data_content4 in a file.  It allows a client to implement the emerging lseek(2) extensions SEEK_HOLE and SEEK_DATA.

14.11.1.
ARGUMENT

   struct SEEK4args {
           /* CURRENT_FH: file */
           stateid4        sa_stateid;
           offset4         sa_offset;
           data_content4   sa_what;
   };

14.11.2.  RESULT

   union seek_content switch (data_content4 content) {
   case NFS4_CONTENT_DATA:
           data_info4      sc_data;
   case NFS4_CONTENT_APP_DATA_HOLE:
           app_data_hole4  sc_adh;
   case NFS4_CONTENT_HOLE:
           data_info4      sc_hole;
   default:
           void;
   };

   struct seek_res4 {
           bool            sr_eof;
           seek_content    sr_contents;
   };

   union SEEK4res switch (nfsstat4 status) {
   case NFS4_OK:
           seek_res4       resok4;
   default:
           void;
   };

14.11.3.  DESCRIPTION

From the given sa_offset, find the next data_content4 of type sa_what in the file.  For either a hole or ADH, this must return the data_content4 in its entirety.  For data, it must not return the actual data.

SEEK must follow the same rules for stateids as READ_PLUS (Section 14.10.3).

If the server could not find a corresponding sa_what, then the status would still be NFS4_OK, but sr_eof would be TRUE.  The sr_contents would contain a zeroed-out content of the appropriate type.

15.  NFSv4.2 Callback Operations

15.1.  Operation 15: CB_OFFLOAD - Report results of an asynchronous operation

15.1.1.  ARGUMENT

   struct write_response4 {
           stateid4        wr_callback_id<1>;
           count4          wr_count;
           stable_how4     wr_committed;
           verifier4       wr_writeverf;
   };

   union offload_info4 switch (nfsstat4 coa_status) {
   case NFS4_OK:
           write_response4 coa_resok4;
   default:
           length4         coa_bytes_copied;
   };

   struct CB_OFFLOAD4args {
           nfs_fh4         coa_fh;
           stateid4        coa_stateid;
           offload_info4   coa_offload_info;
   };

15.1.2.  RESULT

   struct CB_OFFLOAD4res {
           nfsstat4        cor_status;
   };

15.1.3.
DESCRIPTION

CB_OFFLOAD is used to report to the client the results of an asynchronous operation, e.g., Server-side Copy or a hole punch.  The coa_fh and coa_stateid identify the transaction, and the coa_status indicates success or failure.  The coa_resok4.wr_callback_id MUST NOT be set.  If the transaction failed, then coa_bytes_copied contains the number of bytes copied before the failure occurred.  The coa_bytes_copied value indicates the number of bytes copied but not which specific bytes have been copied.

If the client supports either

1.  the COPY operation, or

2.  the WRITE_PLUS operation and any arm of the discriminated union other than NFS4_CONTENT_DATA,

then the client is REQUIRED to support the CB_OFFLOAD operation.

There is a potential race between the reply to the original transaction on the forechannel and the CB_OFFLOAD callback on the backchannel.  Sections 2.10.6.3 and 20.9.3 in [1] describe how to handle this type of issue.

15.1.3.1.  Server-side Copy

CB_OFFLOAD is used for both intra- and inter-server asynchronous copies.  This operation is sent by the destination server to the client in a CB_COMPOUND request.  Upon success, coa_resok4.wr_count contains the total number of bytes copied.

15.1.3.2.  WRITE_PLUS

CB_OFFLOAD is used to report the completion of either a hole punch or an ADH initialization.  Upon success, coa_resok4 will contain the same information that a synchronous WRITE_PLUS would have returned.

16.  IANA Considerations

This section uses terms that are defined in [24].

17.  References

17.1.  Normative References

[1]   Shepler, S., Eisler, M., and D. Noveck, "Network File System (NFS) Version 4 Minor Version 1 Protocol", RFC 5661, January 2010.
[2]   Haynes, T., "Network File System (NFS) Version 4 Minor Version 2 External Data Representation Standard (XDR) Description", March 2013.

[3]   Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform Resource Identifier (URI): Generic Syntax", STD 66, RFC 3986, January 2005.

[4]   Haynes, T. and N. Williams, "Remote Procedure Call (RPC) Security Version 3", draft-williams-rpcsecgssv3 (work in progress), 2011.

[5]   The Open Group, "Section 'posix_fadvise()' of System Interfaces of The Open Group Base Specifications Issue 6, IEEE Std 1003.1, 2004 Edition", 2004.

[6]   Eisler, M., Chiu, A., and L. Ling, "RPCSEC_GSS Protocol Specification", RFC 2203, September 1997.

[7]   Halevy, B., Welch, B., and J. Zelenka, "Object-Based Parallel NFS (pNFS) Operations", RFC 5664, January 2010.

17.2.  Informative References

[8]   Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", March 1997.

[9]   Haynes, T. and D. Noveck, "Network File System (NFS) version 4 Protocol", draft-ietf-nfsv4-rfc3530bis-25 (Work In Progress), February 2013.

[10]  Lentini, J., Everhart, C., Ellard, D., Tewari, R., and M. Naik, "NSDB Protocol for Federated Filesystems", draft-ietf-nfsv4-federated-fs-protocol (Work In Progress), 2010.

[11]  Lentini, J., Everhart, C., Ellard, D., Tewari, R., and M. Naik, "Administration Protocol for Federated Filesystems", draft-ietf-nfsv4-federated-fs-admin (Work In Progress), 2010.

[12]  Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., Leach, P., and T. Berners-Lee, "Hypertext Transfer Protocol -- HTTP/1.1", RFC 2616, June 1999.

[13]  Postel, J. and J. Reynolds, "File Transfer Protocol", STD 9, RFC 959, October 1985.

[14]  Simpson, W., "PPP Challenge Handshake Authentication Protocol (CHAP)", RFC 1994, August 1996.
[15]  Strohm, R., "Chapter 2, Data Blocks, Extents, and Segments, of Oracle Database Concepts 11g Release 1 (11.1)", January 2011.

[16]  Ashdown, L., "Chapter 15, Validating Database Files and Backups, of Oracle Database Backup and Recovery User's Guide 11g Release 1 (11.1)", August 2008.

[17]  McDougall, R. and J. Mauro, "Section 11.4.3, Detecting Memory Corruption of Solaris Internals", 2007.

[18]  Bairavasundaram, L., Goodson, G., Schroeder, B., Arpaci-Dusseau, A., and R. Arpaci-Dusseau, "An Analysis of Data Corruption in the Storage Stack", Proceedings of the 6th USENIX Symposium on File and Storage Technologies (FAST '08), 2008.

[19]  Haynes, T., "Requirements for Labeled NFS", draft-ietf-nfsv4-labreqs-03 (work in progress).

[20]  "Section 46.6. Multi-Level Security (MLS) of Deployment Guide: Deployment, configuration and administration of Red Hat Enterprise Linux 5, Edition 6", 2011.

[21]  Quigley, D. and J. Lu, "Registry Specification for MAC Security Label Formats", draft-quigley-label-format-registry (work in progress), 2011.

[22]  IESG, "IESG Processing of RFC Errata for the IETF Stream", 2008.

[23]  Eisler, M., "XDR: External Data Representation Standard", RFC 4506, May 2006.

[24]  Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA Considerations Section in RFCs", BCP 26, RFC 5226, May 2008.

Appendix A.  Acknowledgments

For the pNFS Access Permissions Check, the original draft was by Sorin Faibish, David Black, Mike Eisler, and Jason Glasgow.  The work was influenced by discussions with Benny Halevy and Bruce Fields.  A review was done by Tom Haynes.

For the Sharing change attribute implementation details with NFSv4 clients, the original draft was by Trond Myklebust.
For the NFS Server-side Copy, the original draft was by James Lentini, Mike Eisler, Deepak Kenchammana, Anshul Madan, and Rahul Iyer.  Tom Talpey co-authored an unpublished version of that document.  It was also reviewed by a number of individuals: Pranoop Erasani, Tom Haynes, Arthur Lent, Trond Myklebust, Dave Noveck, Theresa Lingutla-Raj, Manjunath Shankararao, Satyam Vaghani, and Nico Williams.

For the NFS space reservation operations, the original draft was by Mike Eisler, James Lentini, Manjunath Shankararao, and Rahul Iyer.

For the sparse file support, the original draft was by Dean Hildebrand and Marc Eshel.  Valuable input and advice was received from Sorin Faibish, Bruce Fields, Benny Halevy, Trond Myklebust, and Richard Scheffenegger.

For the Application IO Hints, the original draft was by Dean Hildebrand, Mike Eisler, Trond Myklebust, and Sam Falkner.  Some early reviewers included Benny Halevy and Pranoop Erasani.

For Labeled NFS, the original draft was by David Quigley, James Morris, Jarret Lu, and Tom Haynes.  Peter Staubach, Trond Myklebust, Stephen Smalley, Sorin Faibish, Nico Williams, and David Black also contributed in the final push to get this accepted.

During the review process, Talia Reyes-Ortiz helped the sessions run smoothly.  While many people contributed here and there, the core reviewers were Andy Adamson, Pranoop Erasani, Bruce Fields, Chuck Lever, Trond Myklebust, David Noveck, Peter Staubach, and Mike Kupfer.

Appendix B.
RFC Editor Notes

[RFC Editor: please remove this section prior to publishing this document as an RFC]

[RFC Editor: prior to publishing this document as an RFC, please replace all occurrences of RFCTBD10 with RFCxxxx where xxxx is the RFC number of this document]

Author's Address

   Thomas Haynes (editor)
   NetApp
   9110 E 66th St
   Tulsa, OK  74133
   USA

   Phone: +1 918 307 1415
   Email: thomas@netapp.com
   URI:   http://www.tulsalabs.com