2 NFSv4 T. Haynes 3 Internet-Draft Editor 4 Intended status: Standards Track January 04, 2012 5 Expires: July 7, 2012 7 NFS Version 4 Minor Version 2 8 draft-ietf-nfsv4-minorversion2-07.txt 10 Abstract 12 This Internet-Draft describes NFS version 4 minor version two, 13 focusing mainly on the protocol extensions made from NFS version 4 14 minor version 0 and NFS version 4 minor version 1. Major extensions 15 introduced in NFS version 4 minor version two include: Server-side 16 Copy, Space Reservations, and Support for Sparse Files. 18 Requirements Language 20 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 21 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 22 document are to be interpreted as described in RFC 2119 [1].
24 Status of this Memo 26 This Internet-Draft is submitted in full conformance with the 27 provisions of BCP 78 and BCP 79. 29 Internet-Drafts are working documents of the Internet Engineering 30 Task Force (IETF). Note that other groups may also distribute 31 working documents as Internet-Drafts. The list of current Internet- 32 Drafts is at http://datatracker.ietf.org/drafts/current/. 34 Internet-Drafts are draft documents valid for a maximum of six months 35 and may be updated, replaced, or obsoleted by other documents at any 36 time. It is inappropriate to use Internet-Drafts as reference 37 material or to cite them other than as "work in progress." 39 This Internet-Draft will expire on July 7, 2012. 41 Copyright Notice 43 Copyright (c) 2012 IETF Trust and the persons identified as the 44 document authors. All rights reserved. 46 This document is subject to BCP 78 and the IETF Trust's Legal 47 Provisions Relating to IETF Documents 48 (http://trustee.ietf.org/license-info) in effect on the date of 49 publication of this document. Please review these documents 50 carefully, as they describe your rights and restrictions with respect 51 to this document. Code Components extracted from this document must 52 include Simplified BSD License text as described in Section 4.e of 53 the Trust Legal Provisions and are provided without warranty as 54 described in the Simplified BSD License. 56 This document may contain material from IETF Documents or IETF 57 Contributions published or made publicly available before November 58 10, 2008. The person(s) controlling the copyright in some of this 59 material may not have granted the IETF Trust the right to allow 60 modifications of such material outside the IETF Standards Process. 61 Without obtaining an adequate license from the person(s) controlling 62 the copyright in such materials, this document may not be modified 63 outside the IETF Standards Process, and derivative works of it may 64 not be created outside the IETF Standards Process, except to format 65 it for publication as an RFC or to translate it into languages other 66 than English. 68 Table of Contents 70 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 5 71 1.1. The NFS Version 4 Minor Version 2 Protocol . . . . . . . 5 72 1.2. Scope of This Document . . . . . . . . . . . . . . . . . 5 73 1.3. NFSv4.2 Goals . . . . . . . . . . . . . . . . . . . . . . 5 74 1.4. Overview of NFSv4.2 Features . . . . . . . . . . . . . . 5 75 1.4.1. Sparse Files . . . . . . . . . . . . . . . . . . . . . 5 76 1.4.2. Application I/O Advise . . . . . . . . . . . . . . . . 6 77 1.5. Differences from NFSv4.1 . . . . . . . . . . . . . . . . 6 78 2. NFS Server-side Copy . . . . . . . . . . . . . . . . . . . . . 6 79 2.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 6 80 2.2. Protocol Overview . . . . . . . . . . . . . . . . . . . . 7 81 2.2.1. Intra-Server Copy . . . . . . . . . . . . . . . . . . 8 82 2.2.2. Inter-Server Copy . . . . . . . . . . . . . . . . . . 9 83 2.2.3. Server-to-Server Copy Protocol . . . . . . . . . . . . 12 84 2.3. Operations . . . . . . . . . . . . . . . . . . . . . . . 14 85 2.3.1. netloc4 - Network Locations . . . . . . . . . . . . . 14 86 2.3.2. Copy Offload Stateids . . . . . . . . . . . . . . . . 15 87 2.4. Security Considerations . . . . . . . . . . . . . . . . . 15 88 2.4.1. Inter-Server Copy Security . . . . . . . . . . . . . . 15 89 3. Sparse Files . . . . . . . . . . . . . . . . . . . . . . . . . 24 90 3.1. Introduction . . . . . . . . . . . . . . . . . . 
. . . . 24 91 3.2. Terminology . . . . . . . . . . . . . . . . . . . . . . . 24 92 3.3. Determining the next hole/data . . . . . . . . . . . . . 25 93 4. Space Reservation . . . . . . . . . . . . . . . . . . . . . . 25 94 4.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 25 95 5. Support for Application IO Hints . . . . . . . . . . . . . . . 27 96 5.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 27 97 5.2. POSIX Requirements . . . . . . . . . . . . . . . . . . . 28 98 5.3. Additional Requirements . . . . . . . . . . . . . . . . . 29 99 5.4. Security Considerations . . . . . . . . . . . . . . . . . 30 100 5.5. IANA Considerations . . . . . . . . . . . . . . . . . . . 30 101 6. Application Data Block Support . . . . . . . . . . . . . . . . 30 102 6.1. Generic Framework . . . . . . . . . . . . . . . . . . . . 31 103 6.1.1. Data Block Representation . . . . . . . . . . . . . . 31 104 6.1.2. Data Content . . . . . . . . . . . . . . . . . . . . . 32 105 6.2. pNFS Considerations . . . . . . . . . . . . . . . . . . . 32 106 6.3. An Example of Detecting Corruption . . . . . . . . . . . 33 107 6.4. Example of READ_PLUS . . . . . . . . . . . . . . . . . . 34 108 6.5. Zero Filled Holes . . . . . . . . . . . . . . . . . . . . 35 109 7. Labeled NFS . . . . . . . . . . . . . . . . . . . . . . . . . 35 110 7.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 35 111 7.2. Definitions . . . . . . . . . . . . . . . . . . . . . . . 36 112 7.3. MAC Security Attribute . . . . . . . . . . . . . . . . . 37 113 7.3.1. Interpreting FATTR4_SEC_LABEL . . . . . . . . . . . . 37 114 7.3.2. Delegations . . . . . . . . . . . . . . . . . . . . . 38 115 7.3.3. Permission Checking . . . . . . . . . . . . . . . . . 38 116 7.3.4. Object Creation . . . . . . . . . . . . . . . . . . . 39 117 7.3.5. Existing Objects . . . . . . . . . . . . . . . . . . . 39 118 7.3.6. Label Changes . . . . . . . . . . . . . . . . . . . . 39 119 7.4. pNFS Considerations . . . . . . . . . . . . . . . . . . . 40 120 7.5. Discovery of Server LNFS Support . . . . . . . . . . . . 40 121 7.6. MAC Security NFS Modes of Operation . . . . . . . . . . . 41 122 7.6.1. Full Mode . . . . . . . . . . . . . . . . . . . . . . 41 123 7.6.2. Smart Client Mode . . . . . . . . . . . . . . . . . . 42 124 7.6.3. Smart Server Mode . . . . . . . . . . . . . . . . . . 43 125 7.7. Security Considerations . . . . . . . . . . . . . . . . . 44 126 8. Sharing change attribute implementation details with NFSv4 127 clients . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 128 8.1. Introduction . . . . . . . . . . . . . . . . . . . . . . 44 129 8.2. Definition of the 'change_attr_type' per-file system 130 attribute . . . . . . . . . . . . . . . . . . . . . . . . 45 131 9. Security Considerations . . . . . . . . . . . . . . . . . . . 46 132 10. File Attributes . . . . . . . . . . . . . . . . . . . . . . . 46 133 10.1. Attribute Definitions . . . . . . . . . . . . . . . . . . 46 134 11. Operations: REQUIRED, RECOMMENDED, or OPTIONAL . . . . . . . . 47 135 12. NFSv4.2 Operations . . . . . . . . . . . . . . . . . . . . . . 50 136 12.1. Operation 59: COPY - Initiate a server-side copy . . . . 50 137 12.2. Operation 60: COPY_ABORT - Cancel a server-side copy . . 58 138 12.3. Operation 61: COPY_NOTIFY - Notify a source server of 139 a future copy . . . . . . . . . . . . . . . . . . . . . . 59 140 12.4. Operation 62: COPY_REVOKE - Revoke a destination 141 server's copy privileges . . . . . . . . . . . . . . . . 62 142 12.5. 
Operation 63: COPY_STATUS - Poll for status of a 143 server-side copy . . . . . . . . . . . . . . . . . . . . 63 144 12.6. Modification to Operation 42: EXCHANGE_ID - 145 Instantiate Client ID . . . . . . . . . . . . . . . . . . 64 146 12.7. Operation 64: INITIALIZE . . . . . . . . . . . . . . . . 65 147 12.8. Operation 67: IO_ADVISE - Application I/O access 148 pattern hints . . . . . . . . . . . . . . . . . . . . . . 69 149 12.9. Changes to Operation 51: LAYOUTRETURN . . . . . . . . . . 75 150 12.10. Operation 65: READ_PLUS . . . . . . . . . . . . . . . . . 78 151 12.11. Operation 66: SEEK . . . . . . . . . . . . . . . . . . . 84 152 13. NFSv4.2 Callback Operations . . . . . . . . . . . . . . . . . 86 153 13.1. Procedure 16: CB_ATTR_CHANGED - Notify Client that 154 the File's Attributes Changed . . . . . . . . . . . . . . 86 155 13.2. Operation 15: CB_COPY - Report results of a 156 server-side copy . . . . . . . . . . . . . . . . . . . . 86 157 14. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 88 158 15. References . . . . . . . . . . . . . . . . . . . . . . . . . . 88 159 15.1. Normative References . . . . . . . . . . . . . . . . . . 88 160 15.2. Informative References . . . . . . . . . . . . . . . . . 89 161 Appendix A. Acknowledgments . . . . . . . . . . . . . . . . . . . 91 162 Appendix B. RFC Editor Notes . . . . . . . . . . . . . . . . . . 91 163 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . . 91 165 1. Introduction 167 1.1. The NFS Version 4 Minor Version 2 Protocol 169 The NFS version 4 minor version 2 (NFSv4.2) protocol is the third 170 minor version of the NFS version 4 (NFSv4) protocol. The first minor 171 version, NFSv4.0, is described in [11] and the second minor version, 172 NFSv4.1, is described in [2]. It follows the guidelines for minor 173 versioning that are listed in Section 11 of [11]. 175 As a minor version, NFSv4.2 is consistent with the overall goals for 176 NFSv4, but extends the protocol so as to better meet those goals, 177 based on experiences with NFSv4.1. In addition, NFSv4.2 has adopted 178 some additional goals, which motivate some of the major extensions in 179 NFSv4.2. 181 1.2. Scope of This Document 183 This document describes the NFSv4.2 protocol. With respect to 184 NFSv4.0 and NFSv4.1, this document does not: 186 o describe the NFSv4.0 or NFSv4.1 protocols, except where needed to 187 contrast with NFSv4.2. 189 o modify the specification of the NFSv4.0 or NFSv4.1 protocols. 191 o clarify the NFSv4.0 or NFSv4.1 protocols. I.e., any 192 clarifications made here apply to NFSv4.2 and neither of the prior 193 protocols. 195 The full XDR for NFSv4.2 is presented in [3]. 197 1.3. NFSv4.2 Goals 199 [[Comment.1: This needs fleshing out! --TH]] 201 1.4. Overview of NFSv4.2 Features 203 [[Comment.2: This needs fleshing out! --TH]] 205 1.4.1. Sparse Files 207 Two new operations are defined to support the reading of sparse files 208 (READ_PLUS) and the punching of holes to remove backing storage 209 (INITIALIZE). 211 1.4.2. Application I/O Advise 213 We propose a new IO_ADVISE operation for NFSv4.2 that clients can use 214 to communicate expected I/O behavior to the server. By communicating 215 future I/O behavior such as whether a file will be accessed 216 sequentially or randomly, and whether a file will or will not be 217 accessed in the near future, servers can optimize future I/O requests 218 for a file by, for example, prefetching or evicting data. 
This 219 operation can be used to support the posix_fadvise function as well 220 as other applications such as databases and video editors. 222 1.5. Differences from NFSv4.1 224 [[Comment.3: This needs fleshing out! --TH]] 226 2. NFS Server-side Copy 228 2.1. Introduction 230 This section describes a server-side copy feature for the NFS 231 protocol. 233 The server-side copy feature provides a mechanism for the NFS client 234 to perform a file copy on the server without the data being 235 transmitted back and forth over the network. 237 Without this feature, an NFS client copies data from one location to 238 another by reading the data from the server over the network, and 239 then writing the data back over the network to the server. Using 240 this server-side copy operation, the client is able to instruct the 241 server to copy the data locally without the data being sent back and 242 forth over the network unnecessarily. 244 In general, this feature is useful whenever data is copied from one 245 location to another on the server. It is particularly useful when 246 copying the contents of a file from a backup. Backup-versions of a 247 file are copied for a number of reasons, including restoring and 248 cloning data. 250 If the source object and destination object are on different file 251 servers, the file servers will communicate with one another to 252 perform the copy operation. The server-to-server protocol by which 253 this is accomplished is not defined in this document. 255 2.2. Protocol Overview 257 The server-side copy offload operations support both intra-server and 258 inter-server file copies. An intra-server copy is a copy in which 259 the source file and destination file reside on the same server. In 260 an inter-server copy, the source file and destination file are on 261 different servers. In both cases, the copy may be performed 262 synchronously or asynchronously. 264 Throughout the rest of this document, we refer to the NFS server 265 containing the source file as the "source server" and the NFS server 266 to which the file is transferred as the "destination server". In the 267 case of an intra-server copy, the source server and destination 268 server are the same server. Therefore in the context of an intra- 269 server copy, the terms source server and destination server refer to 270 the single server performing the copy. 272 The operations described below are designed to copy files. Other 273 file system objects can be copied by building on these operations or 274 using other techniques. For example if the user wishes to copy a 275 directory, the client can synthesize a directory copy by first 276 creating the destination directory and then copying the source 277 directory's files to the new destination directory. If the user 278 wishes to copy a namespace junction [12] [13], the client can use the 279 ONC RPC Federated Filesystem protocol [13] to perform the copy. 280 Specifically the client can determine the source junction's 281 attributes using the FEDFS_LOOKUP_FSN procedure and create a 282 duplicate junction using the FEDFS_CREATE_JUNCTION procedure. 284 For the inter-server copy protocol, the operations are defined to be 285 compatible with a server-to-server copy protocol in which the 286 destination server reads the file data from the source server. This 287 model in which the file data is pulled from the source by the 288 destination has a number of advantages over a model in which the 289 source pushes the file data to the destination. 
The advantages of 290 the pull model include: 292 o The pull model only requires a remote server (i.e., the 293 destination server) to be granted read access. A push model 294 requires a remote server (i.e., the source server) to be granted 295 write access, which is more privileged. 297 o The pull model allows the destination server to stop reading if it 298 has run out of space. In a push model, the destination server 299 must flow control the source server in this situation. 301 o The pull model allows the destination server to easily flow 302 control the data stream by adjusting the size of its read 303 operations. In a push model, the destination server does not have 304 this ability. The source server in a push model is capable of 305 writing chunks larger than the destination server has requested in 306 attributes and session parameters. In theory, the destination 307 server could perform a "short" write in this situation, but this 308 approach is known to behave poorly in practice. 310 The following operations are provided to support server-side copy: 312 COPY_NOTIFY: For inter-server copies, the client sends this 313 operation to the source server to notify it of a future file copy 314 from a given destination server for the given user. 316 COPY_REVOKE: Also for inter-server copies, the client sends this 317 operation to the source server to revoke permission to copy a file 318 for the given user. 320 COPY: Used by the client to request a file copy. 322 COPY_ABORT: Used by the client to abort an asynchronous file copy. 324 COPY_STATUS: Used by the client to poll the status of an 325 asynchronous file copy. 327 CB_COPY: Used by the destination server to report the results of an 328 asynchronous file copy to the client. 330 These operations are described in detail in Section 2.3. This 331 section provides an overview of how these operations are used to 332 perform server-side copies. 334 2.2.1. Intra-Server Copy 336 To copy a file on a single server, the client uses a COPY operation. 337 The server may respond to the copy operation with the final results 338 of the copy or it may perform the copy asynchronously and deliver the 339 results using a CB_COPY operation callback. If the copy is performed 340 asynchronously, the client may poll the status of the copy using 341 COPY_STATUS or cancel the copy using COPY_ABORT. 343 A synchronous intra-server copy is shown in Figure 1. In this 344 example, the NFS server chooses to perform the copy synchronously. 345 The copy operation is completed, either successfully or 346 unsuccessfully, before the server replies to the client's request. 347 The server's reply contains the final result of the operation. 349 Client Server 350 + + 351 | | 352 |--- COPY ---------------------------->| Client requests 353 |<------------------------------------/| a file copy 354 | | 355 | | 357 Figure 1: A synchronous intra-server copy. 359 An asynchronous intra-server copy is shown in Figure 2. In this 360 example, the NFS server performs the copy asynchronously. The 361 server's reply to the copy request indicates that the copy operation 362 was initiated and the final result will be delivered at a later time. 363 The server's reply also contains a copy stateid. The client may use 364 this copy stateid to poll for status information (as shown) or to 365 cancel the copy using a COPY_ABORT. When the server completes the 366 copy, the server performs a callback to the client and reports the 367 results. 
369 Client Server 370 + + 371 | | 372 |--- COPY ---------------------------->| Client requests 373 |<------------------------------------/| a file copy 374 | | 375 | | 376 |--- COPY_STATUS --------------------->| Client may poll 377 |<------------------------------------/| for status 378 | | 379 | . | Multiple COPY_STATUS 380 | . | operations may be sent. 381 | . | 382 | | 383 |<-- CB_COPY --------------------------| Server reports results 384 |\------------------------------------>| 385 | | 387 Figure 2: An asynchronous intra-server copy. 389 2.2.2. Inter-Server Copy 391 A copy may also be performed between two servers. The copy protocol 392 is designed to accommodate a variety of network topologies. As shown 393 in Figure 3, the client and servers may be connected by multiple 394 networks. In particular, the servers may be connected by a 395 specialized, high speed network (network 192.168.33.0/24 in the 396 diagram) that does not include the client. The protocol allows the 397 client to setup the copy between the servers (over network 398 10.11.78.0/24 in the diagram) and for the servers to communicate on 399 the high speed network if they choose to do so. 401 192.168.33.0/24 402 +-------------------------------------+ 403 | | 404 | | 405 | 192.168.33.18 | 192.168.33.56 406 +-------+------+ +------+------+ 407 | Source | | Destination | 408 +-------+------+ +------+------+ 409 | 10.11.78.18 | 10.11.78.56 410 | | 411 | | 412 | 10.11.78.0/24 | 413 +------------------+------------------+ 414 | 415 | 416 | 10.11.78.243 417 +-----+-----+ 418 | Client | 419 +-----------+ 421 Figure 3: An example inter-server network topology. 423 For an inter-server copy, the client notifies the source server that 424 a file will be copied by the destination server using a COPY_NOTIFY 425 operation. The client then initiates the copy by sending the COPY 426 operation to the destination server. The destination server may 427 perform the copy synchronously or asynchronously. 429 A synchronous inter-server copy is shown in Figure 4. In this case, 430 the destination server chooses to perform the copy before responding 431 to the client's COPY request. 433 An asynchronous copy is shown in Figure 5. In this case, the 434 destination server chooses to respond to the client's COPY request 435 immediately and then perform the copy asynchronously. 437 Client Source Destination 438 + + + 439 | | | 440 |--- COPY_NOTIFY --->| | 441 |<------------------/| | 442 | | | 443 | | | 444 |--- COPY ---------------------------->| 445 | | | 446 | | | 447 | |<----- read -----| 448 | |\--------------->| 449 | | | 450 | | . | Multiple reads may 451 | | . | be necessary 452 | | . | 453 | | | 454 | | | 455 |<------------------------------------/| Destination replies 456 | | | to COPY 458 Figure 4: A synchronous inter-server copy. 460 Client Source Destination 461 + + + 462 | | | 463 |--- COPY_NOTIFY --->| | 464 |<------------------/| | 465 | | | 466 | | | 467 |--- COPY ---------------------------->| 468 |<------------------------------------/| 469 | | | 470 | | | 471 | |<----- read -----| 472 | |\--------------->| 473 | | | 474 | | . | Multiple reads may 475 | | . | be necessary 476 | | . | 477 | | | 478 | | | 479 |--- COPY_STATUS --------------------->| Client may poll 480 |<------------------------------------/| for status 481 | | | 482 | | . | Multiple COPY_STATUS 483 | | . | operations may be sent 484 | | . 
| 485 | | | 486 | | | 487 | | | 488 |<-- CB_COPY --------------------------| Destination reports 489 |\------------------------------------>| results 490 | | | 492 Figure 5: An asynchronous inter-server copy. 494 2.2.3. Server-to-Server Copy Protocol 496 During an inter-server copy, the destination server reads the file 497 data from the source server. The source server and destination 498 server are not required to use a specific protocol to transfer the 499 file data. The choice of what protocol to use is ultimately the 500 destination server's decision. 502 2.2.3.1. Using NFSv4.x as a Server-to-Server Copy Protocol 504 The destination server MAY use standard NFSv4.x (where x >= 1) to 505 read the data from the source server. If NFSv4.x is used for the 506 server-to-server copy protocol, the destination server can use the 507 filehandle contained in the COPY request with standard NFSv4.x 508 operations to read data from the source server. Specifically, the 509 destination server may use the NFSv4.x OPEN operation's CLAIM_FH 510 facility to open the file being copied and obtain an open stateid. 511 Using the stateid, the destination server may then use NFSv4.x READ 512 operations to read the file. 514 2.2.3.2. Using an alternative Server-to-Server Copy Protocol 516 In a homogeneous environment, the source and destination servers 517 might be able to perform the file copy extremely efficiently using 518 specialized protocols. For example the source and destination 519 servers might be two nodes sharing a common file system format for 520 the source and destination file systems. Thus the source and 521 destination are in an ideal position to efficiently render the image 522 of the source file to the destination file by replicating the file 523 system formats at the block level. Another possibility is that the 524 source and destination might be two nodes sharing a common storage 525 area network, and thus there is no need to copy any data at all, and 526 instead ownership of the file and its contents might simply be re- 527 assigned to the destination. To allow for these possibilities, the 528 destination server is allowed to use a server-to-server copy protocol 529 of its choice. 531 In a heterogeneous environment, using a protocol other than NFSv4.x 532 (e.g,. HTTP [14] or FTP [15]) presents some challenges. In 533 particular, the destination server is presented with the challenge of 534 accessing the source file given only an NFSv4.x filehandle. 536 One option for protocols that identify source files with path names 537 is to use an ASCII hexadecimal representation of the source 538 filehandle as the file name. 540 Another option for the source server is to use URLs to direct the 541 destination server to a specialized service. For example, the 542 response to COPY_NOTIFY could include the URL 543 ftp://s1.example.com:9999/_FH/0x12345, where 0x12345 is the ASCII 544 hexadecimal representation of the source filehandle. When the 545 destination server receives the source server's URL, it would use 546 "_FH/0x12345" as the file name to pass to the FTP server listening on 547 port 9999 of s1.example.com. On port 9999 there would be a special 548 instance of the FTP service that understands how to convert NFS 549 filehandles to an open file descriptor (in many operating systems, 550 this would require a new system call, one which is the inverse of the 551 makefh() function that the pre-NFSv4 MOUNT service needs). 
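The following non-normative C sketch shows one way an implementation could render an opaque filehandle as the ASCII hexadecimal name component described above. The function name and buffer handling are illustrative assumptions, not part of the protocol or of any existing implementation.

   /*
    * Non-normative sketch: render an opaque NFSv4 filehandle as an
    * ASCII hexadecimal path component (the "_FH/0x..." convention
    * described above).  Illustrative only; not part of the protocol.
    */
   #include <stdio.h>
   #include <stddef.h>

   void fh_to_hex_name(const unsigned char *fh, size_t fh_len,
                       char *name, size_t name_len)
   {
       size_t i, off;

       if (name_len < 3)
           return;
       name[0] = '0';
       name[1] = 'x';
       off = 2;
       for (i = 0; i < fh_len && off + 2 < name_len; i++, off += 2)
           snprintf(name + off, 3, "%02x", fh[i]);
       name[off] = '\0';
   }

For instance, a four-byte filehandle {0x01, 0x23, 0x45, 0x67} would yield the name "0x01234567", which could then be used as the final component of an _FH path.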
553 Authenticating and identifying the destination server to the source 554 server is also a challenge. Recommendations for how to accomplish 555 this are given in Section 2.4.1.2.4 and Section 2.4.1.4. 557 2.3. Operations 559 In the sections that follow, several operations are defined that 560 together provide the server-side copy feature. These operations are 561 intended to be OPTIONAL operations as defined in section 17 of [2]. 562 The COPY_NOTIFY, COPY_REVOKE, COPY, COPY_ABORT, and COPY_STATUS 563 operations are designed to be sent within an NFSv4 COMPOUND 564 procedure. The CB_COPY operation is designed to be sent within an 565 NFSv4 CB_COMPOUND procedure. 567 Each operation is performed in the context of the user identified by 568 the ONC RPC credential of its containing COMPOUND or CB_COMPOUND 569 request. For example, a COPY_ABORT operation issued by a given user 570 indicates that a specified COPY operation initiated by the same user 571 be canceled. Therefore a COPY_ABORT MUST NOT interfere with a copy 572 of the same file initiated by another user. 574 An NFS server MAY allow an administrative user to monitor or cancel 575 copy operations using an implementation specific interface. 577 2.3.1. netloc4 - Network Locations 579 The server-side copy operations specify network locations using the 580 netloc4 data type shown below: 582 enum netloc_type4 { 583 NL4_NAME = 0, 584 NL4_URL = 1, 585 NL4_NETADDR = 2 586 }; 587 union netloc4 switch (netloc_type4 nl_type) { 588 case NL4_NAME: utf8str_cis nl_name; 589 case NL4_URL: utf8str_cis nl_url; 590 case NL4_NETADDR: netaddr4 nl_addr; 591 }; 593 If the netloc4 is of type NL4_NAME, the nl_name field MUST be 594 specified as a UTF-8 string. The nl_name is expected to be resolved 595 to a network address via DNS, LDAP, NIS, /etc/hosts, or some other 596 means. If the netloc4 is of type NL4_URL, a server URL [4] 597 appropriate for the server-to-server copy operation is specified as a 598 UTF-8 string. If the netloc4 is of type NL4_NETADDR, the nl_addr 599 field MUST contain a valid netaddr4 as defined in Section 3.3.9 of 600 [2]. 602 When netloc4 values are used for an inter-server copy as shown in 603 Figure 3, their values may be evaluated on the source server, 604 destination server, and client. The network environment in which 605 these systems operate should be configured so that the netloc4 values 606 are interpreted as intended on each system. 608 2.3.2. Copy Offload Stateids 610 A server may perform a copy offload operation asynchronously. An 611 asynchronous copy is tracked using a copy offload stateid. Copy 612 offload stateids are included in the COPY, COPY_ABORT, COPY_STATUS, 613 and CB_COPY operations. 615 Section 8.2.4 of [2] specifies that stateids are valid until either 616 (A) the client or server restart or (B) the client returns the 617 resource. 619 A copy offload stateid will be valid until either (A) the client or 620 server restart or (B) the client returns the resource by issuing a 621 COPY_ABORT operation or the client replies to a CB_COPY operation. 623 A copy offload stateid's seqid MUST NOT be 0 (zero). In the context 624 of a copy offload operation, it is ambiguous to indicate the most 625 recent copy offload operation using a stateid with seqid of 0 (zero). 626 Therefore a copy offload stateid with seqid of 0 (zero) MUST be 627 considered invalid. 629 2.4. Security Considerations 631 The security considerations pertaining to NFSv4 [11] apply to this 632 document. 
634 The standard security mechanisms provided by NFSv4 [11] may be used to 635 secure the protocol described in this document. 637 NFSv4 clients and servers supporting the inter-server copy 638 operations described in this document are REQUIRED to implement [5], 639 including the RPCSEC_GSSv3 privileges copy_from_auth and 640 copy_to_auth. If the server-to-server copy protocol is ONC RPC 641 based, the servers are also REQUIRED to implement the RPCSEC_GSSv3 642 privilege copy_confirm_auth. These requirements to implement are not 643 requirements to use. NFSv4 clients and servers are RECOMMENDED to 644 use [5] to secure server-side copy operations. 646 2.4.1. Inter-Server Copy Security 648 2.4.1.1. Requirements for Secure Inter-Server Copy 650 Inter-server copy is driven by several requirements: 652 o The specification MUST NOT mandate an inter-server copy protocol. 653 There are many ways to copy data. Some will be more optimal than 654 others depending on the identities of the source server and 655 destination server. For example, the source and destination 656 servers might be two nodes sharing a common file system format for 657 the source and destination file systems. Thus the source and 658 destination are in an ideal position to efficiently render the 659 image of the source file to the destination file by replicating 660 the file system formats at the block level. In other cases, the 661 source and destination might be two nodes sharing a common storage 662 area network, and thus there is no need to copy any data at all, 663 and instead ownership of the file and its contents simply gets re- 664 assigned to the destination. 666 o The specification MUST provide guidance for using NFSv4.x as a 667 copy protocol. For those source and destination servers willing 668 to use NFSv4.x there are specific security considerations that 669 this specification can and does address. 671 o The specification MUST NOT mandate pre-configuration between the 672 source and destination server. Requiring that the source and 673 destination first have a "copying relationship" increases the 674 administrative burden. However, the specification MUST NOT 675 preclude implementations that require pre-configuration. 677 o The specification MUST NOT mandate a trust relationship between 678 the source and destination server. The NFSv4 security model 679 requires mutual authentication between a principal on an NFS 680 client and a principal on an NFS server. This model MUST continue 681 with the introduction of COPY. 683 2.4.1.2. Inter-Server Copy with RPCSEC_GSSv3 685 When the client sends a COPY_NOTIFY to the source server in anticipation 686 of the destination attempting to copy data from the source server, it is 687 expected that this copy is being done on behalf of the principal 688 (called the "user principal") that sent the RPC request that encloses 689 the COMPOUND procedure that contains the COPY_NOTIFY operation. The 690 user principal is identified by the RPC credentials. A mechanism 691 that allows the user principal to authorize the destination server to 692 perform the copy in a manner that lets the source server properly 693 authenticate the destination's copy, and without allowing the 694 destination to exceed its authorization, is necessary. 696 An approach that sends delegated credentials of the client's user 697 principal to the destination server is not used for the following 698 reasons. If the client's user delegated its credentials, the 699 destination would authenticate as the user principal.
If the 700 destination were using the NFSv4 protocol to perform the copy, then 701 the source server would authenticate the destination server as the 702 user principal, and the file copy would securely proceed. However, 703 this approach would allow the destination server to copy other files. 704 The user principal would have to trust the destination server to not 705 do so. This is counter to the requirements, and therefore is not 706 considered. Instead, an approach using RPCSEC_GSSv3 [5] privileges is 707 proposed. 709 One of the stated applications of the proposed RPCSEC_GSSv3 protocol 710 is compound client host and user authentication [+ privilege 711 assertion]. For inter-server file copy, we require compound NFS 712 server host and user authentication [+ privilege assertion]. The 713 distinction between the two is one without meaning. 715 RPCSEC_GSSv3 introduces the notion of privileges. We define three 716 privileges: 718 copy_from_auth: A user principal is authorizing a source principal 719 ("nfs@<source>") to allow a destination principal ("nfs@ 720 <destination>") to copy a file from the source to the destination. 721 This privilege is established on the source server before the user 722 principal sends a COPY_NOTIFY operation to the source server. 724 struct copy_from_auth_priv { 725 secret4 cfap_shared_secret; 726 netloc4 cfap_destination; 727 /* the NFSv4 user name that the user principal maps to */ 728 utf8str_mixed cfap_username; 729 /* equal to seq_num of rpc_gss_cred_vers_3_t */ 730 unsigned int cfap_seq_num; 731 }; 733 cfap_shared_secret is a secret value the user principal generates. 735 copy_to_auth: A user principal is authorizing a destination 736 principal ("nfs@<destination>") to allow it to copy a file from 737 the source to the destination. This privilege is established on 738 the destination server before the user principal sends a COPY 739 operation to the destination server. 741 struct copy_to_auth_priv { 742 /* equal to cfap_shared_secret */ 743 secret4 ctap_shared_secret; 744 netloc4 ctap_source; 745 /* the NFSv4 user name that the user principal maps to */ 746 utf8str_mixed ctap_username; 747 /* equal to seq_num of rpc_gss_cred_vers_3_t */ 748 unsigned int ctap_seq_num; 749 }; 751 ctap_shared_secret is a secret value the user principal generated 752 and that was used to establish the copy_from_auth privilege with the 753 source principal. 755 copy_confirm_auth: A destination principal is confirming with the 756 source principal that it is authorized to copy data from the 757 source on behalf of the user principal. When the inter-server 758 copy protocol is NFSv4, or for that matter, any protocol capable 759 of being secured via RPCSEC_GSSv3 (i.e., any ONC RPC protocol), 760 this privilege is established before the file is copied from the 761 source to the destination. 763 struct copy_confirm_auth_priv { 764 /* equal to GSS_GetMIC() of cfap_shared_secret */ 765 opaque ccap_shared_secret_mic<>; 766 /* the NFSv4 user name that the user principal maps to */ 767 utf8str_mixed ccap_username; 768 /* equal to seq_num of rpc_gss_cred_vers_3_t */ 769 unsigned int ccap_seq_num; 770 }; 772 2.4.1.2.1. Establishing a Security Context 774 When the user principal wants to COPY a file between two servers, if 775 it has not established copy_from_auth and copy_to_auth privileges on 776 the servers, it establishes them: 778 o The user principal generates a secret it will share with the two 779 servers.
This shared secret will be placed in the 780 cfap_shared_secret and ctap_shared_secret fields of the 781 appropriate privilege data types, copy_from_auth_priv and 782 copy_to_auth_priv. 784 o An instance of copy_from_auth_priv is filled in with the shared 785 secret, the destination server, and the NFSv4 user id of the user 786 principal. It will be sent with an RPCSEC_GSS3_CREATE procedure, 787 and so cfap_seq_num is set to the seq_num of the credential of the 788 RPCSEC_GSS3_CREATE procedure. Because cfap_shared_secret is a 789 secret, after XDR encoding copy_from_auth_priv, GSS_Wrap() (with 790 privacy) is invoked on copy_from_auth_priv. The 791 RPCSEC_GSS3_CREATE procedure's arguments are: 793 struct { 794 rpc_gss3_gss_binding *compound_binding; 795 rpc_gss3_chan_binding *chan_binding_mic; 796 rpc_gss3_assertion assertions<>; 797 rpc_gss3_extension extensions<>; 798 } rpc_gss3_create_args; 800 The string "copy_from_auth" is placed in assertions[0].privs. The 801 output of GSS_Wrap() is placed in extensions[0].data. The field 802 extensions[0].critical is set to TRUE. The source server calls 803 GSS_Unwrap() on the privilege, and verifies that the seq_num 804 matches the credential. It then verifies that the NFSv4 user id 805 being asserted matches the source server's mapping of the user 806 principal. If it does, the privilege is established on the source 807 server as: <"copy_from_auth", user id, destination>. The 808 successful reply to RPCSEC_GSS3_CREATE has: 810 struct { 811 opaque handle<>; 812 rpc_gss3_chan_binding *chan_binding_mic; 813 rpc_gss3_assertion granted_assertions<>; 814 rpc_gss3_assertion server_assertions<>; 815 rpc_gss3_extension extensions<>; 816 } rpc_gss3_create_res; 818 The field "handle" is the RPCSEC_GSSv3 handle that the client will 819 use on COPY_NOTIFY requests involving the source and destination 820 server. granted_assertions[0].privs will be equal to 821 "copy_from_auth". The server will return a GSS_Wrap() of 822 copy_to_auth_priv. 824 o An instance of copy_to_auth_priv is filled in with the shared 825 secret, the source server, and the NFSv4 user id. It will be sent 826 with an RPCSEC_GSS3_CREATE procedure, and so ctap_seq_num is set 827 to the seq_num of the credential of the RPCSEC_GSS3_CREATE 828 procedure. Because ctap_shared_secret is a secret, after XDR 829 encoding copy_to_auth_priv, GSS_Wrap() is invoked on 830 copy_to_auth_priv. The RPCSEC_GSS3_CREATE procedure's arguments 831 are: 833 struct { 834 rpc_gss3_gss_binding *compound_binding; 835 rpc_gss3_chan_binding *chan_binding_mic; 836 rpc_gss3_assertion assertions<>; 837 rpc_gss3_extension extensions<>; 838 } rpc_gss3_create_args; 840 The string "copy_to_auth" is placed in assertions[0].privs. The 841 output of GSS_Wrap() is placed in extensions[0].data. The field 842 extensions[0].critical is set to TRUE. After unwrapping, 843 verifying the seq_num, and the user principal to NFSv4 user ID 844 mapping, the destination establishes a privilege of 845 <"copy_to_auth", user id, source>. The successful reply to 846 RPCSEC_GSS3_CREATE has: 848 struct { 849 opaque handle<>; 850 rpc_gss3_chan_binding *chan_binding_mic; 851 rpc_gss3_assertion granted_assertions<>; 852 rpc_gss3_assertion server_assertions<>; 853 rpc_gss3_extension extensions<>; 854 } rpc_gss3_create_res; 856 The field "handle" is the RPCSEC_GSSv3 handle that the client will 857 use on COPY requests involving the source and destination server. 858 The field granted_assertions[0].privs will be equal to 859 "copy_to_auth". 
The server will return a GSS_Wrap() of 860 copy_to_auth_priv. 862 2.4.1.2.2. Starting a Secure Inter-Server Copy 864 When the client sends a COPY_NOTIFY request to the source server, it 865 uses the privileged "copy_from_auth" RPCSEC_GSSv3 handle. 866 cna_destination_server in COPY_NOTIFY MUST be the same as the name of 867 the destination server specified in copy_from_auth_priv. Otherwise, 868 COPY_NOTIFY will fail with NFS4ERR_ACCESS. The source server 869 verifies that the privilege <"copy_from_auth", user id, destination> 870 exists, and annotates it with the source filehandle, if the user 871 principal has read access to the source file, and if administrative 872 policies give the user principal and the NFS client read access to 873 the source file (i.e., if the ACCESS operation would grant read 874 access). Otherwise, COPY_NOTIFY will fail with NFS4ERR_ACCESS. 876 When the client sends a COPY request to the destination server, it 877 uses the privileged "copy_to_auth" RPCSEC_GSSv3 handle. 878 ca_source_server in COPY MUST be the same as the name of the source 879 server specified in copy_to_auth_priv. Otherwise, COPY will fail 880 with NFS4ERR_ACCESS. The destination server verifies that the 881 privilege <"copy_to_auth", user id, source> exists, and annotates it 882 with the source and destination filehandles. If the client has 883 failed to establish the "copy_to_auth" privilege, the destination server 884 will reject the request with NFS4ERR_PARTNER_NO_AUTH. 886 If the client sends a COPY_REVOKE to the source server to rescind the 887 destination server's copy privilege, it uses the privileged 888 "copy_from_auth" RPCSEC_GSSv3 handle, and the cra_destination_server 889 in COPY_REVOKE MUST be the same as the name of the destination server 890 specified in copy_from_auth_priv. The source server will then delete 891 the <"copy_from_auth", user id, destination> privilege and fail any 892 subsequent copy requests sent under the auspices of this privilege 893 from the destination server. 895 2.4.1.2.3. Securing ONC RPC Server-to-Server Copy Protocols 897 After a destination server has a "copy_to_auth" privilege established 898 on it, and it receives a COPY request, if it knows it will use an ONC 899 RPC protocol to copy data, it will establish a "copy_confirm_auth" 900 privilege on the source server, using nfs@<destination> as the 901 initiator principal, and nfs@<source> as the target principal. 903 The value of the field ccap_shared_secret_mic is a GSS_GetMIC() of 904 the shared secret passed in the copy_to_auth privilege. The field 905 ccap_username is the mapping of the user principal to an NFSv4 user 906 name ("user"@"domain" form), and MUST be the same as ctap_username 907 and cfap_username. The field ccap_seq_num is the seq_num of the 908 RPCSEC_GSSv3 credential used for the RPCSEC_GSS3_CREATE procedure the 909 destination will send to the source server to establish the 910 privilege. 912 The source server verifies the privilege, and establishes a 913 <"copy_confirm_auth", user id, destination> privilege. If the source 914 server fails to verify the privilege, the COPY operation will be 915 rejected with NFS4ERR_PARTNER_NO_AUTH. All subsequent ONC RPC 916 requests sent from the destination to copy data from the source to 917 the destination will use the RPCSEC_GSSv3 handle returned by the 918 source's RPCSEC_GSS3_CREATE response.
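As a non-normative illustration, the MIC carried in ccap_shared_secret_mic could be produced with the standard GSS-API C bindings. The sketch below assumes an already-established GSS-API security context ("ctx") between nfs@<destination> (initiator) and nfs@<source> (target); the function name, the abbreviated error handling, and the use of the default quality of protection are assumptions of the sketch, not requirements of this specification.

   /*
    * Non-normative sketch: computing a MIC over the shared secret with
    * the GSS-API C bindings.  "ctx" is assumed to be an established
    * security context between nfs@<destination> (initiator) and
    * nfs@<source> (target); error handling is abbreviated.
    */
   #include <gssapi/gssapi.h>

   int make_shared_secret_mic(gss_ctx_id_t ctx,
                              void *secret, size_t secret_len,
                              gss_buffer_desc *mic /* out: MIC token */)
   {
       OM_uint32 major, minor;
       gss_buffer_desc msg;

       msg.value  = secret;      /* the ctap/cfap shared secret */
       msg.length = secret_len;

       /* The resulting token is what would be carried in the
        * ccap_shared_secret_mic field of copy_confirm_auth_priv.
        * The caller releases it with gss_release_buffer(). */
       major = gss_get_mic(&minor, ctx, GSS_C_QOP_DEFAULT, &msg, mic);
       return (major == GSS_S_COMPLETE) ? 0 : -1;
   }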
920 Note that the use of the "copy_confirm_auth" privilege accomplishes 921 the following: 923 o if a protocol like NFS is being used, with export policies, the export 924 policies can be overridden in case the destination server, acting as an 925 NFS client, is not authorized. 927 o manual configuration to allow a copy relationship between the 928 source and destination is not needed. 930 If the attempt to establish a "copy_confirm_auth" privilege fails, 931 then when the user principal sends a COPY request to the destination, the 932 destination server will reject it with NFS4ERR_PARTNER_NO_AUTH. 934 2.4.1.2.4. Securing Non ONC RPC Server-to-Server Copy Protocols 936 If the destination will not be using ONC RPC to copy the data, then the 937 source and destination are using an unspecified copy protocol. The 938 destination could use the shared secret and the NFSv4 user id to 939 prove to the source server that the user principal has authorized the 940 copy. 942 For protocols that authenticate user names with passwords (e.g., HTTP 943 [14] and FTP [15]), the NFSv4 user id could be used as the user name, 944 and an ASCII hexadecimal representation of the RPCSEC_GSSv3 shared 945 secret could be used as the user password or as input into non- 946 password authentication methods like CHAP [16]. 948 2.4.1.3. Inter-Server Copy via ONC RPC but without RPCSEC_GSSv3 950 ONC RPC security flavors other than RPCSEC_GSSv3 MAY be used with the 951 server-side copy offload operations described in this document. In 952 particular, host-based ONC RPC security flavors such as AUTH_NONE and 953 AUTH_SYS MAY be used. If a host-based security flavor is used, a 954 minimal level of protection for the server-to-server copy protocol is 955 possible. 957 In the absence of strong security mechanisms such as RPCSEC_GSSv3, 958 the challenge is how the source server and destination server 959 identify themselves to each other, especially in the presence of 960 multi-homed source and destination servers. In a multi-homed 961 environment, the destination server might not contact the source 962 server from the same network address specified by the client in the 963 COPY_NOTIFY. This can be overcome using the procedure described 964 below. 966 When the client sends the source server the COPY_NOTIFY operation, 967 the source server may reply to the client with a list of target 968 addresses, names, and/or URLs and assign them to the unique 969 quadruple: <source fh, user ID, destination address Y, random number Z>. If the destination uses one of these target netlocs to contact 971 the source server, the source server will be able to uniquely 972 identify the destination server, even if the destination server does 973 not connect from the address specified by the client in COPY_NOTIFY. 974 The level of assurance in this identification depends on the 975 unpredictability, strength, and secrecy of the random number. 977 For example, suppose the network topology is as shown in Figure 3. 978 If the source filehandle is 0x12345, the source server may respond to 979 a COPY_NOTIFY for destination 10.11.78.56 with the URLs: 981 nfs://10.11.78.18//_COPY/FvhH1OKbu8VrxvV1erdjvR7N/10.11.78.56/_FH/ 982 0x12345 984 nfs://192.168.33.18//_COPY/FvhH1OKbu8VrxvV1erdjvR7N/10.11.78.56/ 985 _FH/0x12345 987 The name component after _COPY is 24 characters of base 64, more than 988 enough to encode a 128 bit random number. 990 The client will then send these URLs to the destination server in the 991 COPY operation.
Suppose that the 192.168.33.0/24 network is a high 992 speed network and the destination server decides to transfer the file 993 over this network. If the destination contacts the source server 994 from 192.168.33.56 over this network using NFSv4.1, it does the 995 following: 997 COMPOUND { PUTROOTFH, LOOKUP "_COPY" ; LOOKUP 998 "FvhH1OKbu8VrxvV1erdjvR7N" ; LOOKUP "10.11.78.56"; LOOKUP "_FH" ; 999 OPEN "0x12345" ; GETFH } 1001 Provided that the random number is unpredictable and has been kept 1002 secret by the parties involved, the source server will therefore know 1003 that these NFSv4.x operations are being issued by the destination 1004 server identified in the COPY_NOTIFY. This random number technique 1005 only provides initial authentication of the destination server, and 1006 cannot defend against man-in-the-middle attacks after authentication 1007 or an eavesdropper that observes the random number on the wire. 1008 Other secure communication techniques (e.g., IPsec) are necessary to 1009 block these attacks. 1011 2.4.1.4. Inter-Server Copy without ONC RPC and RPCSEC_GSSv3 1013 The same techniques as in Section 2.4.1.3, using unique URLs for each 1014 destination server, can be used for other protocols (e.g., HTTP [14] 1015 and FTP [15]) as well. 1017 3. Sparse Files 1019 3.1. Introduction 1021 A sparse file is a common way of representing a large file without 1022 having to utilize all of the disk space for it. Consequently, a 1023 sparse file uses less physical space than its size indicates. This 1024 means the file contains 'holes', byte ranges within the file that 1025 contain no data. Most modern file systems support sparse files, 1026 including most UNIX file systems and NTFS, but notably not Apple's 1027 HFS+. Common examples of sparse files include Virtual Machine (VM) 1028 OS/disk images, database files, log files, and even checkpoint 1029 recovery files most commonly used by the HPC community. 1031 If an application reads a hole in a sparse file, the file system must 1032 return all zeros to the application. For local data access there is 1033 little penalty, but with NFS these zeroes must be transferred back to 1034 the client. If an application uses the NFS client to read data into 1035 memory, this wastes time and bandwidth as the application waits for 1036 the zeroes to be transferred. 1038 A sparse file is typically created by initializing the file to be all 1039 zeros: nothing is written to the data in the file; instead, the hole 1040 is recorded in the metadata for the file. So an 8G disk image might 1041 be represented initially by a couple hundred bits in the inode and 1042 nothing on the disk. If the VM then writes 100M of data in the 1043 middle of the image, there would now be two holes represented in the 1044 metadata and 100M in the data. 1046 This section introduces a new operation READ_PLUS (Section 12.10) 1047 which supports all the features of READ but includes an extension to 1048 support sparse pattern files. READ_PLUS is guaranteed to perform no 1049 worse than READ, and can dramatically improve performance with sparse 1050 files. READ_PLUS does not depend on pNFS protocol features, but can 1051 be used by pNFS to support sparse files. 1053 3.2. Terminology 1055 Regular file: An object of file type NF4REG or NF4NAMEDATTR. 1057 Sparse file: A Regular file that contains one or more Holes. 1059 Hole: A byte range within a Sparse file that contains regions of all 1060 zeroes. For block-based file systems, this could also be an 1061 unallocated region of the file. 1063 Hole Threshold: The minimum length of a Hole as determined by the 1064 server. If a server chooses to define a Hole Threshold, then it 1065 would not return hole information about holes with a length 1066 shorter than the Hole Threshold. 1068 3.3. Determining the next hole/data 1070 Solaris and ZFS support an extension to lseek(2) that allows 1071 applications to discover holes in a file. The values, SEEK_HOLE and 1072 SEEK_DATA, allow clients to seek to the next hole or beginning of 1073 data, respectively.
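The non-normative C sketch below walks the data segments of a local sparse file with this lseek(2) extension (available on Solaris and on Linux 3.1 and later). It is included only to illustrate the hole/data discovery semantics that READ_PLUS (Section 12.10) is intended to make available over NFS; nothing in the sketch is new protocol.

   /*
    * Non-normative sketch: enumerate the data segments of a local
    * sparse file with lseek(2) SEEK_DATA/SEEK_HOLE.
    */
   #define _GNU_SOURCE
   #include <stdio.h>
   #include <unistd.h>

   void print_data_segments(int fd)
   {
       off_t data = 0, hole;

       for (;;) {
           data = lseek(fd, data, SEEK_DATA);   /* next byte holding data */
           if (data == (off_t)-1)
               break;                           /* ENXIO: no more data    */
           hole = lseek(fd, data, SEEK_HOLE);   /* end of that data run   */
           if (hole == (off_t)-1)
               break;
           printf("data: [%lld, %lld)\n", (long long)data, (long long)hole);
           data = hole;                         /* skip over the hole     */
       }
   }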
1075 4. Space Reservation 1077 4.1. Introduction 1079 This section describes a set of operations that allow applications 1080 such as hypervisors to reserve space for a file, report the amount of 1081 actual disk space a file occupies, and free up the backing space of a 1082 file when it is not required. In virtualized environments, virtual 1083 disk files are often stored on NFS mounted volumes. Since virtual 1084 disk files represent the hard disks of virtual machines, hypervisors 1085 often have to guarantee certain properties for the file. 1087 One such example is space reservation. When a hypervisor creates a 1088 virtual disk file, it often tries to preallocate the space for the 1089 file so that there are no future allocation-related errors during the 1090 operation of the virtual machine. Such errors prevent a virtual 1091 machine from continuing execution and result in downtime. 1093 Currently, in order to achieve such a guarantee, applications zero 1094 the entire file. The initial zeroing allocates the backing blocks 1095 and all subsequent writes are overwrites of already allocated blocks. 1096 This approach is not only inefficient in terms of the amount of I/O 1097 done, but it is also not guaranteed to work on filesystems that are log 1098 structured or deduplicated. An efficient way of guaranteeing space 1099 reservation would be beneficial to such applications. 1101 If the space_reserved attribute is set on a file, it is guaranteed 1102 that writes that do not grow the file will not fail with 1103 NFS4ERR_NOSPC. 1105 Another useful feature would be the ability to report the number of 1106 blocks that would be freed when a file is deleted. Currently, NFS 1107 reports two size attributes: 1109 size The logical file size of the file. 1111 space_used The size in bytes that the file occupies on disk. 1113 While these attributes are sufficient for space accounting in 1114 traditional filesystems, they prove to be inadequate in modern 1115 filesystems that support block sharing. In such filesystems, 1116 multiple inodes can point to a single block with a block reference 1117 count to guard against premature freeing. Having a way to tell the 1118 number of blocks that would be freed if the file was deleted would be 1119 useful to applications that wish to migrate files when a volume is 1120 low on space. 1122 Since virtual disks represent a hard drive in a virtual machine, a 1123 virtual disk can be viewed as a filesystem within a file. Since not 1124 all blocks within a filesystem are in use, there is an opportunity to 1125 reclaim blocks that are no longer in use. A call to deallocate 1126 blocks could result in better space efficiency. Less space MAY be 1127 consumed for backups after block deallocation.
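As a non-normative illustration, Linux already exposes this kind of block deallocation locally through fallocate(2) with FALLOC_FL_PUNCH_HOLE. The sketch below punches a hole in a local file; the suggestion that an NFS client might map such a request onto the INITIALIZE operation (Section 12.7) is an assumption of the sketch, not something this document specifies.

   /*
    * Non-normative sketch: local hole punching with fallocate(2) on
    * Linux.  FALLOC_FL_KEEP_SIZE leaves the logical file size alone;
    * only the backing blocks for [offset, offset+length) are released.
    * An NFS client might map such a request onto INITIALIZE
    * (Section 12.7); that mapping is assumed here, not specified.
    */
   #define _GNU_SOURCE
   #include <sys/types.h>
   #include <fcntl.h>

   int punch_hole(int fd, off_t offset, off_t length)
   {
       return fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                        offset, length);
   }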
1129 The following operations and attributes can be used to resolve these 1130 issues: 1132 space_reserved This attribute specifies whether the blocks backing 1133 the file have been preallocated. 1135 space_freed This attribute specifies the space freed when a file is 1136 deleted, taking block sharing into consideration. 1138 INITIALIZE This operation zeroes and/or deallocates the blocks 1139 backing a region of the file. 1141 If space_used of a file is interpreted to mean the size in bytes of 1142 all disk blocks pointed to by the inode of the file, then shared 1143 blocks get double counted, over-reporting the space utilization. 1144 This also has the adverse effect that the deletion of a file with 1145 shared blocks frees up less than space_used bytes. 1147 On the other hand, if space_used is interpreted to mean the size in 1148 bytes of those disk blocks unique to the inode of the file, then 1149 shared blocks are not counted in any file, resulting in under- 1150 reporting of the space utilization. 1152 For example, two files A and B have 10 blocks each. Let 6 of these 1153 blocks be shared between them. Thus, the combined space utilized by 1154 the two files is 14 * BLOCK_SIZE bytes. In the former case, the 1155 combined space utilization of the two files would be reported as 20 * 1156 BLOCK_SIZE. However, deleting either would only result in 4 * 1157 BLOCK_SIZE being freed. Conversely, the latter interpretation would 1158 report that the space utilization is only 8 * BLOCK_SIZE. 1160 Adding another size attribute, space_freed, is helpful in solving 1161 this problem. space_freed is the number of blocks that are allocated 1162 to the given file that would be freed on its deletion. In the 1163 example, both A and B would report space_freed as 4 * BLOCK_SIZE and 1164 space_used as 10 * BLOCK_SIZE. If A is deleted, B will report 1165 space_freed as 10 * BLOCK_SIZE as the deletion of B would result in 1166 the deallocation of all 10 blocks. 1168 The addition of this attribute does not solve the problem of space being 1169 over-reported. However, over-reporting is better than under- 1170 reporting. 1172 5. Support for Application IO Hints 1174 5.1. Introduction 1176 Applications currently have several options for communicating I/O 1177 access patterns to the NFS client. While this can help the NFS 1178 client optimize I/O and caching for a file, it does not allow the NFS 1179 server and its exported file system to do likewise. Therefore, here 1180 we put forth a proposal for the NFSv4.2 protocol to allow 1181 applications to communicate their expected behavior to the server. 1183 By communicating the expected access pattern, e.g., sequential or random, 1184 and data re-use behavior, e.g., that a data range will be read multiple 1185 times and should be cached, the server will be able to better 1186 understand what optimizations it should implement for access to a 1187 file. For example, if an application indicates it will never read the 1188 data more than once, then the file system can avoid polluting the 1189 data cache and not cache the data. 1191 The first interface through which applications can issue client I/O hints is the 1192 posix_fadvise operation. For example, on Linux, when an application 1193 uses posix_fadvise to specify a file will be read sequentially, Linux 1194 doubles the readahead buffer size. 1196 Another instance where applications provide an indication of their 1197 desired I/O behavior is the use of direct I/O.
By specifying direct 1198 I/O, clients will no longer cache data, but this information is not 1199 passed to the server, which will continue caching data. 1201 Application specific NFS clients such as those used by hypervisors 1202 and databases can also leverage application hints to communicate 1203 their specialized requirements. 1205 This section adds a new IO_ADVISE operation to communicate the client 1206 file access patterns to the NFS server. The NFS server upon 1207 receiving an IO_ADVISE operation MAY choose to alter its I/O and 1208 caching behavior, but is under no obligation to do so. 1210 5.2. POSIX Requirements 1212 The first key requirement of the IO_ADVISE operation is to support 1213 the posix_fadvise function [6], which is supported in Linux and many 1214 other operating systems. Examples and guidance on how to use 1215 posix_fadvise to improve performance can be found in [17]. 1216 posix_fadvise is defined as follows: 1218 int posix_fadvise(int fd, off_t offset, off_t len, int advice); 1220 The posix_fadvise() function shall advise the implementation on the 1221 expected behavior of the application with respect to the data in the 1222 file associated with the open file descriptor, fd, starting at offset 1223 and continuing for len bytes. The specified range need not currently 1224 exist in the file. If len is zero, all data following offset is 1225 specified. The implementation may use this information to optimize 1226 handling of the specified data. The posix_fadvise() function shall 1227 have no effect on the semantics of other operations on the specified 1228 data, although it may affect the performance of other operations. 1230 The advice to be applied to the data is specified by the advice 1231 parameter and may be one of the following values: 1233 POSIX_FADV_NORMAL - Specifies that the application has no advice to 1234 give on its behavior with respect to the specified data. It is 1235 the default characteristic if no advice is given for an open file. 1237 POSIX_FADV_SEQUENTIAL - Specifies that the application expects to 1238 access the specified data sequentially from lower offsets to 1239 higher offsets. 1241 POSIX_FADV_RANDOM - Specifies that the application expects to access 1242 the specified data in a random order. 1244 POSIX_FADV_WILLNEED - Specifies that the application expects to 1245 access the specified data in the near future. 1247 POSIX_FADV_DONTNEED - Specifies that the application expects that it 1248 will not access the specified data in the near future. 1250 POSIX_FADV_NOREUSE - Specifies that the application expects to 1251 access the specified data once and then not reuse it thereafter. 1253 Upon successful completion, posix_fadvise() shall return zero; 1254 otherwise, an error number shall be returned to indicate the error. 1256 5.3. Additional Requirements 1258 Many use cases exist for sending application I/O hints to the server 1259 that cannot utilize the POSIX supported interface. This is because 1260 some applications may benefit from additional hints not specified by 1261 posix_fadvise, and some applications may not use POSIX at all. 1263 One use case is "Opportunistic Prefetch", which allows a stateid 1264 holder to tell the server that it is possible that it will access the 1265 specified data in the near future. This is similar to 1266 POSIX_FADV_WILLNEED, but the client is unsure it will in fact read 1267 the specified data, so the server should only prefetch the data if it 1268 can be done at a marginal cost.
For example, when a server receives 1269 this hint, it could prefetch only the indirect blocks for a file 1270 instead of all the data. This would still improve performance if the 1271 client does read the data, but with less pressure on server memory. 1273 An example use case for this hint is a database that reads in a 1274 single record that points to additional records in either other areas 1275 of the same file or different files located on the same or different 1276 server. While it is likely that the application may access the 1277 additional records, it is far from guaranteed. Therefore, the 1278 database may issue an opportunistic prefetch (instead of 1279 POSIX_FADV_WILLNEED) for the data in the other files pointed to by 1280 the record. 1282 Another use case is "Direct I/O", which allows a stateid holder to 1283 inform the server that it does not wish to cache data. Today, for 1284 applications that only intend to read data once, the use of direct 1285 I/O disables client caching, but does not affect server caching. By 1286 caching data that will not be re-read, the server is polluting its 1287 cache and possibly causing useful cached data to be evicted. By 1288 informing the server of its expected I/O access, this situation can 1289 be avoided. Direct I/O can be used in Linux and AIX via the open() 1290 O_DIRECT parameter, in Solaris via the directio() function, and in 1291 Windows via the CreateFile() FILE_FLAG_NO_BUFFERING flag. 1293 Another use case is "Backward Sequential Read", which allows a stateid 1294 holder to inform the server that it intends to read the specified 1295 data backwards, i.e., from the end to the beginning. This is 1296 different from POSIX_FADV_SEQUENTIAL, whose implied intention was 1297 that data will be read from beginning to end. This hint allows 1298 servers to prefetch data at the end of the range first, and then 1299 prefetch data sequentially in a backwards manner to the start of the 1300 data range. One example of an application that can make use of this 1301 hint is video editing. 1303 5.4. Security Considerations 1305 None. 1307 5.5. IANA Considerations 1309 The IO_ADVISE_type4 will be extended through an IANA registry. 1311 6. Application Data Block Support 1313 At the OS level, files are contained in disk blocks. Applications 1314 are also free to impose structure on the data contained in a file and 1315 we can define an Application Data Block (ADB) to be such a structure. 1316 From the application's viewpoint, it only wants to handle ADBs and 1317 not raw bytes (see [18]). An ADB is typically composed of two 1318 sections: a header and data. The header describes the 1319 characteristics of the block and can provide a means to detect 1320 corruption in the data payload. The data section is typically 1321 initialized to all zeros. 1323 The format of the header is application specific, but there are two 1324 main components typically encountered: 1326 1. An ADB Number (ADBN), which allows the application to determine 1327 which data block is being referenced. The ADBN is a logical 1328 block number and is useful when the client is not storing the 1329 blocks in contiguous memory. 1331 2. Fields to describe the state of the ADB and a means to detect 1332 block corruption. For both pieces of data, a useful property is 1333 that allowed values be unique in that if passed across the 1334 network, corruption due to translation between big and little 1335 endian architectures is detectable.
For example, 0xF0DEDEF0 has 1336 the same bit pattern in both architectures. 1338 Applications already impose structures on files [18] and detect 1339 corruption in data blocks [19]. What they are not able to do is 1340 efficiently transfer and store ADBs. To initialize a file with ADBs, 1341 the client must send the full ADB to the server and that must be 1342 stored on the server. When the application is initializing a file to 1343 have the ADB structure, it could compress the ADBs to just the 1344 information necessary to later reconstruct the header portion of 1345 the ADB when the contents are read back. Using sparse file 1346 techniques, the disk blocks described would not be allocated. 1347 Unlike sparse file techniques, there would be a small cost to store 1348 the compressed header data. 1350 In this section, we are going to define a generic framework for an 1351 ADB, present one approach to detecting corruption in a given ADB 1352 implementation, and describe the model for how the client and server 1353 can support efficient initialization of ADBs, reading of ADB holes, 1354 punching holes in ADBs, and space reservation. Further, we need to 1355 be able to extend this model to applications which do not support 1356 ADBs, but wish to be able to handle sparse files, hole punching, and 1357 space reservation. 1359 6.1. Generic Framework 1361 We want the representation of the ADB to be flexible enough to 1362 support many different applications. The most basic approach is no 1363 imposition of a block at all, which means we are working with the raw 1364 bytes. Such an approach would be useful for storing holes, punching 1365 holes, etc. In more complex deployments, a server might be 1366 supporting multiple applications, each with its own definition of 1367 the ADB. One might store the ADBN at the start of the block and then 1368 have a guard pattern to detect corruption [20]. The next might store 1369 the ADBN at an offset of 100 bytes within the block and have no guard 1370 pattern at all. The point is that existing applications might 1371 already have well defined formats for their data blocks. 1373 The guard pattern can be used to represent the state of the block, to 1374 protect against corruption, or both. Again, it needs to be able to 1375 be placed anywhere within the ADB. 1377 We need to be able to represent the starting offset of the block and 1378 the size of the block. Note that nothing prevents the application 1379 from defining different sized blocks in a file. 1381 6.1.1. Data Block Representation 1383 struct app_data_block4 { 1384 offset4 adb_offset; 1385 length4 adb_block_size; 1386 length4 adb_block_count; 1387 length4 adb_reloff_blocknum; 1388 count4 adb_block_num; 1389 length4 adb_reloff_pattern; 1390 opaque adb_pattern<>; 1391 }; 1392 The app_data_block4 structure captures the abstraction presented for 1393 the ADB. The additional fields are present to allow the transmission 1394 of adb_block_count ADBs at one time. We also use adb_block_num to 1395 convey the ADBN of the first block in the sequence. Each ADB will 1396 contain the same adb_pattern string. 1398 As both adb_block_num and adb_pattern are optional, if either 1399 adb_reloff_pattern or adb_reloff_blocknum is set to NFS4_UINT64_MAX, 1400 then the corresponding field is not set in any of the ADBs. 1402 6.1.2. Data Content 1404 /* 1405 * Use an enum such that we can extend new types.
1406 */ 1407 enum data_content4 { 1408 NFS4_CONTENT_DATA = 0, 1409 NFS4_CONTENT_APP_BLOCK = 1, 1410 NFS4_CONTENT_HOLE = 2 1411 }; 1413 New operations might need to differentiate between wanting to access 1414 data versus an ADB. Also, future minor versions might want to 1415 introduce new data formats. This enumeration allows that to occur. 1417 6.2. pNFS Considerations 1419 While this document does not mandate how sparse ADBs are recorded on 1420 the server, it does make the assumption that such information is not 1421 in the file. I.e., the information is metadata. As such, the 1422 INITIALIZE operation is defined to be not supported by the DS - it 1423 must be issued to the MDS. But since the client must not assume a 1424 priori whether a read is sparse or not, the READ_PLUS operation MUST 1425 be supported by both the DS and the MDS. I.e., the client might 1426 impose on the MDS to asynchronously read the data from the DS. 1428 Furthermore, each DS MUST NOT report to a client either a sparse ADB 1429 or data which belongs to another DS. One implication of this 1430 requirement is that the app_data_block4's adb_block_size MUST 1431 either be the stripe width or the stripe width MUST be an even 1432 multiple of it. 1434 The second implication here is that the DS must be able to use the 1435 Control Protocol to determine from the MDS where the sparse ADBs 1436 occur. [[Comment.4: Need to discuss what happens if after the file 1437 is being written to and an INITIALIZE occurs? --TH]] Perhaps instead 1438 of the DS pulling from the MDS, the MDS pushes to the DS? Thus an 1439 INITIALIZE causes a new push? [[Comment.5: Still need to consider 1440 race cases of the DS getting a WRITE and the MDS getting an 1441 INITIALIZE. --TH]] 1443 6.3. An Example of Detecting Corruption 1445 In this section, we define an ADB format in which corruption can be 1446 detected. Note that this is just one possible format and means to 1447 detect corruption. 1449 Consider a very basic implementation of an operating system's disk 1450 blocks. A block is either data or it is an indirect block which 1451 allows for files to be larger than one block. It is desired to be 1452 able to initialize a block. Lastly, to quickly unlink a file, a 1453 block can be marked invalid. The contents remain intact - which 1454 would enable this OS application to undelete a file. 1456 The application defines 4k sized data blocks, with an 8 byte block 1457 counter occurring at offset 0 in the block, and with the guard 1458 pattern occurring at offset 8 inside the block. Furthermore, the 1459 guard pattern can take one of four states: 1461 0xfeedface - This is the FREE state and indicates that the ADB 1462 format has been applied. 1464 0xcafedead - This is the DATA state and indicates that real data 1465 has been written to this block. 1467 0xe4e5c001 - This is the INDIRECT state and indicates that the 1468 block contains block counter numbers that are chained off of this 1469 block. 1471 0xba1ed4a3 - This is the INVALID state and indicates that the block 1472 contains data whose contents are garbage. 1474 Finally, it also defines an 8 byte checksum [21] starting at byte 16 1475 which applies to the remaining contents of the block. If the state 1476 is FREE, then that checksum is trivially zero.
As such, the 1477 application has no need to transfer the checksum implicitly inside 1478 the ADB - it need not make the transfer layer aware of the fact that 1479 there is a checksum (see [19] for an example of checksums used to 1480 detect corruption in application data blocks). 1482 Corruption in each ADB can be detected as follows: 1484 o If the guard pattern is anything other than one of the allowed 1485 values, including all zeros. 1487 o If the guard pattern is FREE and any other byte in the remainder 1488 of the ADB is anything other than zero. 1490 o If the guard pattern is anything other than FREE, then if the 1491 stored checksum does not match the computed checksum. 1493 o If the guard pattern is INDIRECT and one of the stored indirect 1494 block numbers has a value greater than the number of ADBs in the 1495 file. 1497 o If the guard pattern is INDIRECT and one of the stored indirect 1498 block numbers is a duplicate of another stored indirect block 1499 number. 1501 As can be seen, the application can detect errors based on the 1502 combination of the guard pattern state and the checksum. But also, 1503 the application can detect corruption based on the state and the 1504 contents of the ADB. This last point is important in validating the 1505 minimum amount of data we incorporated into our generic framework. 1506 I.e., the guard pattern is sufficient in allowing applications to 1507 design their own corruption detection. 1509 Finally, it is important to note that none of these corruption checks 1510 occur in the transport layer. The server and client components are 1511 totally unaware of the file format and might report everything as 1512 being transferred correctly even in the case where the application detects 1513 corruption. 1515 6.4. Example of READ_PLUS 1517 The hypothetical application presented in Section 6.3 can be used to 1518 illustrate how READ_PLUS would return an array of results. A file is 1519 created and initialized with 100 4k ADBs in the FREE state: 1521 INITIALIZE {0, 4k, 100, 0, 0, 8, 0xfeedface} 1523 Further, assume the application writes a single ADB at 16k, changing 1524 the guard pattern to 0xcafedead; we would then have in memory: 1526 0 -> (16k - 1) : 4k, 4, 0, 0, 8, 0xfeedface 1527 16k -> (20k - 1) : 00 00 00 05 ca fe de ad XX XX ... XX XX 1528 20k -> 400k : 4k, 95, 0, 6, 0xfeedface 1530 And when the client did a READ_PLUS of 64k at the start of the file, 1531 it would get back a result of an ADB, some data, and a final ADB: 1533 ADB {0, 4, 0, 0, 8, 0xfeedface} 1534 data 4k 1535 ADB {20k, 4k, 59, 0, 6, 0xfeedface} 1537 6.5. Zero Filled Holes 1539 As applications are free to define the structure of an ADB, it is 1540 trivial to define an ADB which supports zero filled holes. Such a 1541 case would encompass the traditional definitions of a sparse file and 1542 hole punching. For example, to punch a 64k hole, starting at 100M, 1543 into an existing file which has no ADB structure: 1545 INITIALIZE {100M, 64k, 1, NFS4_UINT64_MAX, 1546 0, NFS4_UINT64_MAX, 0x0} 1548 7. Labeled NFS 1550 7.1. Introduction 1552 Access control models such as Unix permissions or Access Control 1553 Lists are commonly referred to as Discretionary Access Control (DAC) 1554 models. These systems base their access decisions on user identity 1555 and resource ownership. In contrast, Mandatory Access Control (MAC) 1556 models base their access control decisions on the label on the 1557 subject (usually a process) and the object it wishes to access.
1558 These labels may contain user identity information but usually 1559 contain additional information. In DAC systems, users are free to 1560 specify the access rules for resources that they own. MAC models 1561 base their security decisions on a system wide policy established by 1562 an administrator or organization which the users do not have the 1563 ability to override. In this section, we add a MAC model to NFSv4. 1565 The first change necessary is to devise a method for transporting and 1566 storing security label data on NFSv4 file objects. Security labels 1567 have several semantics that are met by NFSv4 recommended attributes 1568 such as the ability to set the label value upon object creation. 1569 Access control on these attributes is done through a combination of 1570 two mechanisms. As with other recommended attributes on file objects 1571 the usual DAC checks (ACLs and permission bits) will be performed to 1572 ensure that proper file ownership is enforced. In addition, a MAC 1573 system MAY be employed on the client, server, or both to enforce 1574 additional policy on what subjects may modify security label 1575 information. 1577 The second change is to provide a method for the server to notify the 1578 client that the attribute changed on an open file on the server. If 1579 the file is closed, then during the open attempt, the client will 1580 gather the new attribute value. The server MUST NOT communicate the 1581 new value of the attribute; the client MUST query it. This 1582 requirement stems from the need for the client to provide sufficient 1583 access rights to the attribute. 1585 The final change necessary is a modification to the RPC layer used in 1586 NFSv4 in the form of a new version of the RPCSEC_GSS [7] framework. 1587 In order for an NFSv4 server to apply MAC checks it must obtain 1588 additional information from the client. Several methods were 1589 explored for performing this and it was decided that the best 1590 approach was to incorporate the ability to make security attribute 1591 assertions through the RPC mechanism. RPCSECGSSv3 [5] outlines a 1592 method to assert additional security information such as security 1593 labels on GSS context creation and have that data bound to all RPC 1594 requests that make use of that context. 1596 7.2. Definitions 1598 Label Format Specifier (LFS): is an identifier used by the client to 1599 establish the syntactic format of the security label and the 1600 semantic meaning of its components. These specifiers exist in a 1601 registry associated with documents describing the format and 1602 semantics of the label. 1604 Label Format Registry: is the IANA registry containing all 1605 registered LFSs along with references to the documents that 1606 describe the syntactic format and semantics of the security label. 1608 Policy Identifier (PI): is an optional part of the definition of a 1609 Label Format Specifier which allows for clients and servers to 1610 identify specific security policies. 1612 Domain of Interpretation (DOI): represents an administrative 1613 security boundary, where all systems within the DOI have 1614 semantically coherent labeling. That is, a security attribute 1615 must always mean exactly the same thing anywhere within the DOI. 1617 Object: is a passive resource within the system that we wish to be 1618 protected. Objects can be entities such as files, directories, 1619 pipes, sockets, and many other system resources relevant to the 1620 protection of the system state.
1622 Subject: A subject is an active entity, usually a process, which is 1623 requesting access to an object. 1625 Multi-Level Security (MLS): is a traditional model where objects are 1626 given a sensitivity level (Unclassified, Secret, Top Secret, etc.) 1627 and a category set [22]. 1629 7.3. MAC Security Attribute 1631 MAC models base access decisions on security attributes bound to 1632 subjects and objects. This information can range from a user 1633 identity for an identity based MAC model, to sensitivity levels for 1634 Multi-Level Security, to a type for Type Enforcement. These models 1635 base their decisions on different criteria but the semantics of the 1636 security attribute remain the same. The semantics required by the 1637 security attributes are listed below: 1639 o Must provide flexibility with respect to MAC model. 1641 o Must provide the ability to atomically set security information 1642 upon object creation. 1644 o Must provide the ability to enforce access control decisions both 1645 on the client and the server. 1647 o Must not expose an object to either the client or server name 1648 space before its security information has been bound to it. 1650 NFSv4 implements the security attribute as a recommended attribute. 1651 These attributes have a fixed format and semantics, which conflicts 1652 with the flexible nature of the security attribute. To resolve this, 1653 the security attribute consists of two components. The first 1654 component is an LFS as defined in [23] to allow for interoperability 1655 between MAC mechanisms. The second component is an opaque field 1656 which is the actual security attribute data. To allow for various 1657 MAC models, NFSv4 should be used solely as a transport mechanism for 1658 the security attribute. It is the responsibility of the endpoints to 1659 consume the security attribute and make access decisions based on 1660 their respective models. In addition, creation of objects through 1661 OPEN and CREATE allows for the security attribute to be specified 1662 upon creation. By providing an atomic create and set operation for 1663 the security attribute it is possible to enforce the second and 1664 fourth requirements. The recommended attribute FATTR4_SEC_LABEL will 1665 be used to satisfy this requirement. 1667 7.3.1. Interpreting FATTR4_SEC_LABEL 1669 The XDR [24] necessary to implement Labeled NFSv4 is presented below: 1671 const FATTR4_SEC_LABEL = 81; 1673 typedef uint32_t policy4; 1675 Figure 6 1677 struct labelformat_spec4 { 1678 policy4 lfs_lfs; 1679 policy4 lfs_pi; 1680 }; 1682 struct sec_label_attr_info { 1683 labelformat_spec4 slai_lfs; 1684 opaque slai_data<>; 1685 }; 1687 The FATTR4_SEC_LABEL consists of two components, with the 1688 first component being an LFS. It serves to provide the receiving end 1689 with the information necessary to translate the security attribute 1690 into a form that is usable by the endpoint. Label Formats assigned 1691 an LFS may optionally choose to include a Policy Identifier field to 1692 allow for complex policy deployments. The LFS and Label Format 1693 Registry are described in detail in [23]. The translation used to 1694 interpret the security attribute is not specified as part of the 1695 protocol as it may depend on various factors. The second component 1696 is an opaque section which contains the data of the attribute. This 1697 component is dependent on the MAC model to interpret and enforce.
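   For illustration only, the sketch below shows how a MAC aware client
   might populate these structures before XDR encoding them as the
   FATTR4_SEC_LABEL attribute value. The C declarations mirror
   rpcgen-style bindings for the XDR above; the LFS number (77) and the
   SELinux-style label string are purely hypothetical examples, not
   values assigned by any registry.

   /* Hypothetical example: fill in sec_label_attr_info prior to XDR
    * encoding.  Field names mirror rpcgen-style bindings for the XDR
    * above; the LFS value and the label string are illustrative only. */
   typedef unsigned int policy4;

   struct labelformat_spec4 {
           policy4 lfs_lfs;
           policy4 lfs_pi;
   };

   struct sec_label_attr_info {
           struct labelformat_spec4 slai_lfs;
           struct {
                   unsigned int  slai_data_len;
                   char         *slai_data_val;
           } slai_data;
   };

   static void
   fill_label(struct sec_label_attr_info *info)
   {
           static char label[] = "system_u:object_r:nfs_t:s0";

           info->slai_lfs.lfs_lfs = 77;  /* hypothetical LFS number    */
           info->slai_lfs.lfs_pi  = 0;   /* no Policy Identifier used  */
           info->slai_data.slai_data_len = sizeof(label) - 1;
           info->slai_data.slai_data_val = label;
   }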
1699 In particular, it is the responsibility of the LFS specification to 1700 define a maximum size for the opaque section, slai_data<>. When 1701 creating or modifying a label for an object, the client needs to be 1702 guaranteed that the server will accept a label that is sized 1703 correctly. By both client and server being part of a specific MAC 1704 model, the client will be aware of the size. 1706 7.3.2. Delegations 1708 In the event that a security attribute is changed on the server while 1709 a client holds a delegation on the file, the client should follow the 1710 existing protocol with respect to attribute changes. It should flush 1711 all changes back to the server and relinquish the delegation. 1713 7.3.3. Permission Checking 1715 It is not feasible to enumerate all possible MAC models and even 1716 levels of protection within a subset of these models. This means 1717 that the NFSv4 clients and servers cannot be expected to directly make 1718 access control decisions based on the security attribute. Instead 1719 NFSv4 should defer permission checking on this attribute to the host 1720 system. These checks are performed in addition to existing DAC and 1721 ACL checks outlined in the NFSv4 protocol. Section 7.6 gives a 1722 specific example of how the security attribute is handled under a 1723 particular MAC model. 1725 7.3.4. Object Creation 1727 When creating files in NFSv4, the OPEN and CREATE operations are used. 1728 One of the parameters to these operations is an fattr4 structure 1729 containing the attributes the file is to be created with. This 1730 allows NFSv4 to atomically set the security attribute of files upon 1731 creation. When a client is MAC aware, it must always provide the 1732 initial security attribute upon file creation. In the event that the 1733 server is the only MAC aware entity in the system, it should ignore 1734 the security attribute specified by the client and instead make the 1735 determination itself. A more in depth explanation can be found in 1736 Section 7.6. 1738 7.3.5. Existing Objects 1740 Note that under the MAC model, all objects must have labels. 1741 Therefore, if an existing server is upgraded to include LNFS support, 1742 then it is the responsibility of the security system to define the 1743 behavior for existing objects. For example, if the security system 1744 is LFS 0, which means the server just stores and returns labels, then 1745 existing files should return labels which are set to an empty value. 1747 7.3.6. Label Changes 1749 As per the requirements, when a file's security label is modified, 1750 the server must notify all clients which have the file opened of the 1751 change in label. It does so with CB_ATTR_CHANGED. There are 1752 preconditions to making an attribute change imposed by NFSv4 and the 1753 security system might want to impose others. In the process of 1754 meeting these preconditions, the server may choose to either serve the 1755 request in whole or return NFS4ERR_DELAY to the SETATTR operation. 1757 If there are open delegations on the file belonging to clients other 1758 than the one making the label change, then the process described in 1759 Section 7.3.2 must be followed. 1761 As the server is always presented with the subject label from the 1762 client, it does not necessarily need to communicate the fact that the 1763 label has changed to the client. In the cases where the change 1764 outright denies the client access, the client will be able to quickly 1765 determine that there is a new label in effect.
It is in cases where 1766 the client may share the same object between multiple subjects or a 1767 security system which is not strictly hierarchical that the 1768 CB_ATTR_CHANGED callback is very useful. It allows the server to 1769 inform the clients that the cached security attribute is now stale. 1771 Consider a system in which the clients enforce MAC checks and the 1772 server has a very simple security system which just stores the 1773 labels. In this system, the MAC label check always allows access, 1774 regardless of the subject label. 1776 In such a deployment, MAC labels are enforced by the smart client. So 1777 if client A changes a security label on a file, then the server MUST 1778 inform all clients that have the file opened that the label has 1779 changed via CB_ATTR_CHANGED. Then the clients MUST retrieve the new 1780 label and MUST enforce access via the new attribute values. 1782 [[Comment.6: Describe an LFS of 0, which will be the means to indicate 1783 such a deployment. In the current LFR, 0 is marked as reserved. If 1784 we use it, then we define the default LFS to be used by an LNFS aware 1785 server. I.e., it lets smart clients work together in the face of a 1786 dumb server. Note that while supporting this system is optional, it 1787 will make for a very good debugging mode during development. I.e., 1788 even if a server does not deploy with another security system, this 1789 mode gets your foot in the door. --TH]] 1791 7.4. pNFS Considerations 1793 This section examines the issues in deploying LNFS in a pNFS 1794 community of servers. 1796 7.4.1. MAC Label Checks 1798 The new FATTR4_SEC_LABEL attribute is metadata information and as 1799 such the DS is not aware of the value contained on the MDS. 1800 Fortunately, the NFSv4.1 protocol [2] already has provisions for 1801 doing access level checks from the DS to the MDS. In order for the 1802 DS to validate the subject label presented by the client, it SHOULD 1803 utilize this mechanism. 1805 If a file's FATTR4_SEC_LABEL is changed, then the MDS should utilize 1806 CB_ATTR_CHANGED to inform the client of that fact. If the MDS is 1807 maintaining 1809 7.5. Discovery of Server LNFS Support 1811 The server can easily determine that a client supports LNFS when it 1812 queries for the FATTR4_SEC_LABEL label for an object. Note that it 1813 cannot assume that the presence of RPCSEC_GSSv3 indicates LNFS 1814 support. The client might need to discover which LFS the server 1815 supports. 1817 A server which supports LNFS MUST allow a client with any subject 1818 label to retrieve the FATTR4_SEC_LABEL attribute for the root 1819 filehandle, ROOTFH. The following compound must always succeed as 1820 far as a MAC label check is concerned: 1822 PUTROOTFH, GETATTR {FATTR4_SEC_LABEL} 1824 Note that the server might have imposed a security flavor on the root 1825 that precludes such access. I.e., if the server requires kerberized 1826 access and the client presents a compound with AUTH_SYS, then the 1827 server is allowed to return NFS4ERR_WRONGSEC in this case. But if 1828 the client presents a correct security flavor, then the server MUST 1829 return the FATTR4_SEC_LABEL attribute with the supported LFS filled 1830 in. 1832 7.6. MAC Security NFS Modes of Operation 1834 A system using Labeled NFS may operate in three modes. The first 1835 mode provides the most protection and is called "full mode". In this 1836 mode both the client and server implement a MAC model allowing each 1837 end to make an access control decision.
The remaining two modes are 1838 variations on each other and are called "smart client" and "smart 1839 server" modes. In these modes one end of the connection is not 1840 implementing a MAC model and because of this these operating modes 1841 offer less protection than full mode. 1843 7.6.1. Full Mode 1845 Full mode environments consist of MAC aware NFSv4 servers and clients 1846 and may be composed of mixed MAC models and policies. The system 1847 requires that both the client and server have an opportunity to 1848 perform an access control check based on all relevant information 1849 within the network. The file object security attribute is provided 1850 using the mechanism described in Section 7.3. The security attribute 1851 of the subject making the request is transported at the RPC layer 1852 using the mechanism described in RPCSECGSSv3 [5]. 1854 7.6.1.1. Initial Labeling and Translation 1856 The ability to create a file is an action that a MAC model may wish 1857 to mediate. The client is given the responsibility to determine the 1858 initial security attribute to be placed on a file. This allows the 1859 client to make a decision as to the acceptable security attributes to 1860 create a file with before sending the request to the server. Once 1861 the server receives the creation request from the client it may 1862 choose to evaluate if the security attribute is acceptable. 1864 Security attributes on the client and server may vary based on MAC 1865 model and policy. To handle this the security attribute field has an 1866 LFS component. This component is a mechanism for the host to 1867 identify the format and meaning of the opaque portion of the security 1868 attribute. A full mode environment may contain hosts operating in 1869 several different LFSs and DOIs. In this case a mechanism for 1870 translating the opaque portion of the security attribute is needed. 1871 The actual translation function will vary based on MAC model and 1872 policy and is out of the scope of this document. If a translation is 1873 unavailable for a given LFS and DOI then the request SHOULD be 1874 denied. Another recourse is to allow the host to provide a fallback 1875 mapping for unknown security attributes. 1877 7.6.1.2. Policy Enforcement 1879 In full mode access control decisions are made by both the clients 1880 and servers. When a client makes a request it takes the security 1881 attribute from the requesting process and makes an access control 1882 decision based on that attribute and the security attribute of the 1883 object it is trying to access. If the client denies that access an 1884 RPC call to the server is never made. If however the access is 1885 allowed the client will make a call to the NFS server. 1887 When the server receives the request from the client it extracts the 1888 security attribute conveyed in the RPC request. The server then uses 1889 this security attribute and the attribute of the object the client is 1890 trying to access to make an access control decision. If the server's 1891 policy allows this access it will fulfill the client's request, 1892 otherwise it will return NFS4ERR_ACCESS. 1894 Implementations MAY validate security attributes supplied over the 1895 network to ensure that they are within a set of attributes permitted 1896 from a specific peer, and if not, reject them. Note that a system 1897 may permit a different set of attributes to be accepted from each 1898 peer. 1900 7.6.2. 
Smart Client Mode 1902 Smart client environments consist of NFSv4 servers that are not MAC 1903 aware but NFSv4 clients that are. Clients in this environment 1904 may consist of groups implementing different MAC models and policies. 1905 The system requires that all clients in the environment be 1906 responsible for access control checks. Due to the amount of trust 1907 placed in the clients, this mode is only to be used in a trusted 1908 environment. 1910 7.6.2.1. Initial Labeling and Translation 1912 Just like in full mode, the client is responsible for determining the 1913 initial label upon object creation. The server in smart client mode 1914 does not implement a MAC model; however, it may provide the ability 1915 to restrict the creation and labeling of objects with certain labels 1916 based on different criteria as described in Section 7.6.1.2. 1918 In a smart client environment, a group of clients operate in a single 1919 DOI. This removes the need for the clients to maintain a set of DOI 1920 translations. Servers should provide a method to allow different 1921 groups of clients to access the server at the same time. However, it 1922 should not allow two groups of clients operating in different DOIs to 1923 access the same files. 1925 7.6.2.2. Policy Enforcement 1927 In smart client mode access control decisions are made by the 1928 clients. When a client accesses an object it obtains the security 1929 attribute of the object from the server and combines it with the 1930 security attribute of the process making the request to make an 1931 access control decision. This check is in addition to the DAC checks 1932 provided by NFSv4 so this may fail based on the DAC criteria even if 1933 the MAC policy grants access. As the policy check is located on the 1934 client an access control denial should take the form that is native 1935 to the platform. 1937 7.6.3. Smart Server Mode 1939 Smart server environments consist of NFSv4 servers that are MAC aware 1940 and one or more MAC unaware clients. The server is the only entity 1941 enforcing policy, and may selectively provide standard NFS services 1942 to clients based on their authentication credentials and/or 1943 associated network attributes (e.g., IP address, network interface). 1944 The level of trust and access extended to a client in this mode is 1945 configuration-specific. 1947 7.6.3.1. Initial Labeling and Translation 1949 In smart server mode all labeling and access control decisions are 1950 performed by the NFSv4 server. In this environment the NFSv4 clients 1951 are not MAC aware so they cannot provide input into the access 1952 control decision. This requires the server to determine the initial 1953 labeling of objects. Normally the subject to use in this calculation 1954 would originate from the client. Instead the NFSv4 server may choose 1955 to assign the subject security attribute based on the client's 1956 authentication credentials and/or associated network attributes 1957 (e.g., IP address, network interface). 1959 In smart server mode security attributes are contained solely within 1960 the NFSv4 server. This means that all security attributes used in 1961 the system remain within a single LFS and DOI. Since security 1962 attributes will not cross DOIs or change format there is no need to 1963 provide any translation functionality above that which is needed 1964 internally by the MAC model. 1966 7.6.3.2. Policy Enforcement 1968 All access control decisions in smart server mode are made by the 1969 server.
The server will assign the subject a security attribute 1970 based on some criteria (e.g., IP address, network interface). Using 1971 the newly calculated security attribute and the security attribute of 1972 the object being requested, the MAC model makes the access control 1973 check and returns NFS4ERR_ACCESS on a denial and NFS4_OK on success. 1974 This check is done transparently to the client, so if the MAC 1975 permission check fails, the client may be unaware of the reason for 1976 the permission failure. When operating in this mode, administrators 1977 attempting to debug permission failures should be sure to check the 1978 MAC policy running on the server in addition to the DAC settings. 1980 7.7. Security Considerations 1982 This entire document deals with security issues. 1984 Depending on the level of protection the MAC system offers there may 1985 be a requirement to tightly bind the security attribute to the data. 1987 When only one of the client or server enforces labels, it is 1988 important to realize that the other side is not enforcing MAC 1989 protections. Alternate methods might be in use to handle the lack of 1990 MAC support and care should be taken to identify and mitigate threats 1991 from possible tampering outside of these methods. 1993 An example of this is that a server that modifies READDIR or LOOKUP 1994 results based on the client's subject label might want to always 1995 construct the same subject label for a client which does not present 1996 one. This will prevent a non-LNFS client from mixing entries in the 1997 directory cache. 1999 8. Sharing change attribute implementation details with NFSv4 clients 2001 8.1. Introduction 2003 Although both the NFSv4 [11] and NFSv4.1 [2] protocols define the 2004 change attribute as being mandatory to implement, there is little in 2005 the way of guidance. The only feature that is mandated by them is 2006 that the value must change whenever the file data or metadata change. 2008 While this allows for a wide range of implementations, it also leaves 2009 the client with a conundrum: how does it determine which is the most 2010 recent value for the change attribute in a case where several RPC 2011 calls have been issued in parallel? In other words, if two COMPOUNDs, 2012 both containing WRITE and GETATTR requests for the same file, have 2013 been issued in parallel, how does the client determine which of the 2014 two change attribute values returned in the replies to the GETATTR 2015 requests corresponds to the most recent state of the file? In some 2016 cases, the only recourse may be to send another COMPOUND containing a 2017 third GETATTR that is fully serialised with the first two. 2019 NFSv4.2 avoids this kind of inefficiency by allowing the server to 2020 share details about how the change attribute is expected to evolve, 2021 so that the client may immediately determine which, out of the 2022 several change attribute values returned by the server, is the most 2023 recent. 2025 8.2.
Definition of the 'change_attr_type' per-file system attribute 2027 enum change_attr_typeinfo { 2028 NFS4_CHANGE_TYPE_IS_MONOTONIC_INCR = 0, 2029 NFS4_CHANGE_TYPE_IS_VERSION_COUNTER = 1, 2030 NFS4_CHANGE_TYPE_IS_VERSION_COUNTER_NOPNFS = 2, 2031 NFS4_CHANGE_TYPE_IS_TIME_METADATA = 3, 2032 NFS4_CHANGE_TYPE_IS_UNDEFINED = 4 2033 }; 2035 +------------------+----+---------------------------+-----+ 2036 | Name | Id | Data Type | Acc | 2037 +------------------+----+---------------------------+-----+ 2038 | change_attr_type | XX | enum change_attr_typeinfo | R | 2039 +------------------+----+---------------------------+-----+ 2041 The solution enables the NFS server to provide additional information 2042 about how it expects the change attribute value to evolve after the 2043 file data or metadata has changed. 'change_attr_type' is defined as a 2044 new recommended attribute, and takes values from enum 2045 change_attr_typeinfo as follows: 2047 NFS4_CHANGE_TYPE_IS_MONOTONIC_INCR: The change attribute value MUST 2048 monotonically increase for every atomic change to the file 2049 attributes, data or directory contents. 2051 NFS4_CHANGE_TYPE_IS_VERSION_COUNTER: The change attribute value MUST 2052 be incremented by one unit for every atomic change to the file 2053 attributes, data or directory contents. This property is 2054 preserved when writing to pNFS data servers. 2056 NFS4_CHANGE_TYPE_IS_VERSION_COUNTER_NOPNFS: The change attribute 2057 value MUST be incremented by one unit for every atomic change to 2058 the file attributes, data or directory contents. In the case 2059 where the client is writing to pNFS data servers, the number of 2060 increments is not guaranteed to exactly match the number of 2061 writes. 2063 NFS4_CHANGE_TYPE_IS_TIME_METADATA: The change attribute is 2064 implemented as suggested in the NFSv4 spec [11] in terms of the 2065 time_metadata attribute. 2067 NFS4_CHANGE_TYPE_IS_UNDEFINED: The change attribute does not take 2068 values that fit into any of these categories. 2070 If either NFS4_CHANGE_TYPE_IS_MONOTONIC_INCR, 2071 NFS4_CHANGE_TYPE_IS_VERSION_COUNTER, or 2072 NFS4_CHANGE_TYPE_IS_TIME_METADATA is set, then the client knows at 2073 the very least that the change attribute is monotonically increasing, 2074 which is sufficient to resolve the question of which value is the 2075 most recent. 2077 If the client sees the value NFS4_CHANGE_TYPE_IS_TIME_METADATA, then 2078 by inspecting the value of the 'time_delta' attribute it additionally 2079 has the option of detecting rogue server implementations that use 2080 time_metadata in violation of the spec. 2082 Finally, if the client sees NFS4_CHANGE_TYPE_IS_VERSION_COUNTER, it 2083 has the ability to predict what the resulting change attribute value 2084 should be after a COMPOUND containing a SETATTR, WRITE, or CREATE. 2085 This again allows it to detect changes made in parallel by another 2086 client. The value NFS4_CHANGE_TYPE_IS_VERSION_COUNTER_NOPNFS permits 2087 the same, but only if the client is not doing pNFS WRITEs. 2089 9. Security Considerations 2091 10. File Attributes 2093 10.1. Attribute Definitions 2095 10.1.1. Attribute 77: space_reserved 2097 The space_reserved attribute is a read/write attribute of type 2098 boolean. It is a per file attribute. When the space_reserved 2099 attribute is set via SETATTR, the server must ensure that there is 2100 disk space to accommodate every byte in the file before it can return 2101 success. If the server cannot guarantee this, it must return 2102 NFS4ERR_NOSPC.
2104 If the client tries to grow a file which has the space_reserved 2105 attribute set, the server must guarantee that there is disk space to 2106 accommodate every byte in the file with the new size before it can 2107 return success. If the server cannot guarantee this, it must return 2108 NFS4ERR_NOSPC. 2110 It is not required that the server allocate the space to the file 2111 before returning success. The allocation can be deferred; however, 2112 it must be guaranteed that it will not fail for lack of space. 2114 The value of space_reserved can be obtained at any time through 2115 GETATTR. 2117 In order to avoid ambiguity, the space_reserved bit cannot be set 2118 along with the size bit in SETATTR. Increasing the size of a file 2119 with space_reserved set will fail if space reservation cannot be 2120 guaranteed for the new size. If the file size is decreased, space 2121 reservation is only guaranteed for the new size and the extra blocks 2122 backing the file can be released. 2124 10.1.2. Attribute 78: space_freed 2126 space_freed gives the number of bytes freed if the file is deleted. 2127 This attribute is read only and is of type length4. It is a per file 2128 attribute. 2130 11. Operations: REQUIRED, RECOMMENDED, or OPTIONAL 2132 The following tables summarize the operations of the NFSv4.2 protocol 2133 and the corresponding designation of REQUIRED, RECOMMENDED, and 2134 OPTIONAL to implement or MUST NOT implement. The designation of MUST 2135 NOT implement is reserved for those operations that were defined in 2136 either NFSv4.0 or NFSv4.1 and MUST NOT be implemented in NFSv4.2. 2138 For the most part, the REQUIRED, RECOMMENDED, or OPTIONAL designation 2139 for operations sent by the client is for the server implementation. 2140 The client is generally required to implement the operations needed 2141 for the operating environment for which it serves. For example, a 2142 read-only NFSv4.2 client would have no need to implement the WRITE 2143 operation and is not required to do so. 2145 The REQUIRED or OPTIONAL designation for callback operations sent by 2146 the server is for both the client and server. Generally, the client 2147 has the option of creating the backchannel and sending the operations 2148 on the fore channel that will be a catalyst for the server sending 2149 callback operations. A partial exception is CB_RECALL_SLOT; the only 2150 way the client can avoid supporting this operation is by not creating 2151 a backchannel. 2153 Since this is a summary of the operations and their designation, 2154 there are subtleties that are not presented here. Therefore, if 2155 there is a question of the requirements of implementation, the 2156 operation descriptions themselves must be consulted along with other 2157 relevant explanatory text within either this specification or that of 2158 NFSv4.1 [2]. 2160 The abbreviations used in the second and third columns of the table 2161 are defined as follows. 2163 REQ REQUIRED to implement 2165 REC RECOMMENDED to implement 2167 OPT OPTIONAL to implement 2169 MNI MUST NOT implement 2171 For the NFSv4.2 features that are OPTIONAL, the operations that 2172 support those features are OPTIONAL, and the server would return 2173 NFS4ERR_NOTSUPP in response to the client's use of those operations. 2174 If an OPTIONAL feature is supported, it is possible that a set of 2175 operations related to the feature become REQUIRED to implement.
The 2176 third column of the table designates the feature(s) and if the 2177 operation is REQUIRED or OPTIONAL in the presence of support for the 2178 feature. 2180 The OPTIONAL features identified and their abbreviations are as 2181 follows: 2183 pNFS Parallel NFS 2185 FDELG File Delegations 2187 DDELG Directory Delegations 2189 COPY Server Side Copy 2191 ADB Application Data Blocks 2193 Operations 2195 +----------------------+--------------------+-----------------------+ 2196 | Operation | REQ, REC, OPT, or | Feature (REQ, REC, or | 2197 | | MNI | OPT) | 2198 +----------------------+--------------------+-----------------------+ 2199 | ACCESS | REQ | | 2200 | BACKCHANNEL_CTL | REQ | | 2201 | BIND_CONN_TO_SESSION | REQ | | 2202 | CLOSE | REQ | | 2203 | COMMIT | REQ | | 2204 | COPY | OPT | COPY (REQ) | 2205 | COPY_ABORT | OPT | COPY (REQ) | 2206 | COPY_NOTIFY | OPT | COPY (REQ) | 2207 | COPY_REVOKE | OPT | COPY (REQ) | 2208 | COPY_STATUS | OPT | COPY (REQ) | 2209 | CREATE | REQ | | 2210 | CREATE_SESSION | REQ | | 2211 | DELEGPURGE | OPT | FDELG (REQ) | 2212 | DELEGRETURN | OPT | FDELG, DDELG, pNFS | 2213 | | | (REQ) | 2214 | DESTROY_CLIENTID | REQ | | 2215 | DESTROY_SESSION | REQ | | 2216 | EXCHANGE_ID | REQ | | 2217 | FREE_STATEID | REQ | | 2218 | GETATTR | REQ | | 2219 | GETDEVICEINFO | OPT | pNFS (REQ) | 2220 | GETDEVICELIST | OPT | pNFS (OPT) | 2221 | GETFH | REQ | | 2222 | INITIALIZE | OPT | ADB (REQ) | 2223 | GET_DIR_DELEGATION | OPT | DDELG (REQ) | 2224 | LAYOUTCOMMIT | OPT | pNFS (REQ) | 2225 | LAYOUTGET | OPT | pNFS (REQ) | 2226 | LAYOUTRETURN | OPT | pNFS (REQ) | 2227 | LINK | OPT | | 2228 | LOCK | REQ | | 2229 | LOCKT | REQ | | 2230 | LOCKU | REQ | | 2231 | LOOKUP | REQ | | 2232 | LOOKUPP | REQ | | 2233 | NVERIFY | REQ | | 2234 | OPEN | REQ | | 2235 | OPENATTR | OPT | | 2236 | OPEN_CONFIRM | MNI | | 2237 | OPEN_DOWNGRADE | REQ | | 2238 | PUTFH | REQ | | 2239 | PUTPUBFH | REQ | | 2240 | PUTROOTFH | REQ | | 2241 | READ | OPT | | 2242 | READDIR | REQ | | 2243 | READLINK | OPT | | 2244 | READ_PLUS | OPT | ADB (REQ) | 2245 | RECLAIM_COMPLETE | REQ | | 2246 | RELEASE_LOCKOWNER | MNI | | 2247 | REMOVE | REQ | | 2248 | RENAME | REQ | | 2249 | RENEW | MNI | | 2250 | RESTOREFH | REQ | | 2251 | SAVEFH | REQ | | 2252 | SECINFO | REQ | | 2253 | SECINFO_NO_NAME | REC | pNFS file layout | 2254 | | | (REQ) | 2255 | SEQUENCE | REQ | | 2256 | SETATTR | REQ | | 2257 | SETCLIENTID | MNI | | 2258 | SETCLIENTID_CONFIRM | MNI | | 2259 | SET_SSV | REQ | | 2260 | TEST_STATEID | REQ | | 2261 | VERIFY | REQ | | 2262 | WANT_DELEGATION | OPT | FDELG (OPT) | 2263 | WRITE | REQ | | 2264 +----------------------+--------------------+-----------------------+ 2266 Callback Operations 2268 +-------------------------+-------------------+---------------------+ 2269 | Operation | REQ, REC, OPT, or | Feature (REQ, REC, | 2270 | | MNI | or OPT) | 2271 +-------------------------+-------------------+---------------------+ 2272 | CB_COPY | OPT | COPY (REQ) | 2273 | CB_GETATTR | OPT | FDELG (REQ) | 2274 | CB_LAYOUTRECALL | OPT | pNFS (REQ) | 2275 | CB_NOTIFY | OPT | DDELG (REQ) | 2276 | CB_NOTIFY_DEVICEID | OPT | pNFS (OPT) | 2277 | CB_NOTIFY_LOCK | OPT | | 2278 | CB_PUSH_DELEG | OPT | FDELG (OPT) | 2279 | CB_RECALL | OPT | FDELG, DDELG, pNFS | 2280 | | | (REQ) | 2281 | CB_RECALL_ANY | OPT | FDELG, DDELG, pNFS | 2282 | | | (REQ) | 2283 | CB_RECALL_SLOT | REQ | | 2284 | CB_RECALLABLE_OBJ_AVAIL | OPT | DDELG, pNFS (REQ) | 2285 | CB_SEQUENCE | OPT | FDELG, DDELG, pNFS | 2286 | | | (REQ) | 2287 | CB_WANTS_CANCELLED | OPT | 
FDELG, DDELG, pNFS | 2288 | | | (REQ) | 2289 +-------------------------+-------------------+---------------------+ 2291 12. NFSv4.2 Operations 2293 12.1. Operation 59: COPY - Initiate a server-side copy 2294 12.1.1. ARGUMENT 2296 const COPY4_GUARDED = 0x00000001; 2297 const COPY4_METADATA = 0x00000002; 2299 struct COPY4args { 2300 /* SAVED_FH: source file */ 2301 /* CURRENT_FH: destination file or */ 2302 /* directory */ 2303 offset4 ca_src_offset; 2304 offset4 ca_dst_offset; 2305 length4 ca_count; 2306 uint32_t ca_flags; 2307 component4 ca_destination; 2308 netloc4 ca_source_server<>; 2309 }; 2311 12.1.2. RESULT 2313 union COPY4res switch (nfsstat4 cr_status) { 2314 case NFS4_OK: 2315 stateid4 cr_callback_id<1>; 2316 default: 2317 length4 cr_bytes_copied; 2318 }; 2320 12.1.3. DESCRIPTION 2322 The COPY operation is used for both intra-server and inter-server 2323 copies. In both cases, the COPY is always sent from the client to 2324 the destination server of the file copy. The COPY operation requests 2325 that a file be copied from the location specified by the SAVED_FH 2326 value to the location specified by the combination of CURRENT_FH and 2327 ca_destination. 2329 The SAVED_FH must be a regular file. If SAVED_FH is not a regular 2330 file, the operation MUST fail and return NFS4ERR_WRONG_TYPE. 2332 In order to set SAVED_FH to the source file handle, the compound 2333 procedure requesting the COPY will include a sub-sequence of 2334 operations such as 2336 PUTFH source-fh 2337 SAVEFH 2339 If the request is for a server-to-server copy, the source-fh is a 2340 filehandle from the source server and the compound procedure is being 2341 executed on the destination server. In this case, the source-fh is a 2342 foreign filehandle on the server receiving the COPY request. If 2343 either PUTFH or SAVEFH checked the validity of the filehandle, the 2344 operation would likely fail and return NFS4ERR_STALE. 2346 In order to avoid this problem, the minor version incorporating the 2347 COPY operations will need to make a few small changes in the handling 2348 of existing operations. If a server supports the server-to-server 2349 COPY feature, a PUTFH followed by a SAVEFH MUST NOT return 2350 NFS4ERR_STALE for either operation. These restrictions do not pose 2351 substantial difficulties for servers. The CURRENT_FH and SAVED_FH 2352 may be validated in the context of the operation referencing them and 2353 an NFS4ERR_STALE error returned for an invalid file handle at that 2354 point. 2356 The CURRENT_FH and ca_destination together specify the destination of 2357 the copy operation. If ca_destination is of 0 (zero) length, then 2358 CURRENT_FH specifies the target file. In this case, CURRENT_FH MUST 2359 be a regular file and not a directory. If ca_destination is not of 0 2360 (zero) length, the ca_destination argument specifies the file name to 2361 which the data will be copied within the directory identified by 2362 CURRENT_FH. In this case, CURRENT_FH MUST be a directory and not a 2363 regular file. 2365 If the file named by ca_destination does not exist and the operation 2366 completes successfully, the file will be visible in the file system 2367 namespace. If the file does not exist and the operation fails, the 2368 file MAY be visible in the file system namespace depending on when 2369 the failure occurs and on the implementation of the NFS server 2370 receiving the COPY operation. 
If the ca_destination name cannot be 2371 created in the destination file system (due to file name 2372 restrictions, such as case or length), the operation MUST fail. 2374 The ca_src_offset is the offset within the source file from which the 2375 data will be read, the ca_dst_offset is the offset within the 2376 destination file to which the data will be written, and the ca_count 2377 is the number of bytes that will be copied. An offset of 0 (zero) 2378 specifies the start of the file. A count of 0 (zero) requests that 2379 all bytes from ca_src_offset through EOF be copied to the 2380 destination. If concurrent modifications to the source file overlap 2381 with the source file region being copied, the data copied may include 2382 all, some, or none of the modifications. The client can use standard 2383 NFS operations (e.g., OPEN with OPEN4_SHARE_DENY_WRITE or mandatory 2384 byte range locks) to protect against concurrent modifications if the 2385 client is concerned about this. If the source file's end of file is 2386 being modified in parallel with a copy that specifies a count of 0 2387 (zero) bytes, the amount of data copied is implementation dependent 2388 (clients may guard against this case by specifying a non-zero count 2389 value or preventing modification of the source file as mentioned 2390 above). 2392 If the source offset or the source offset plus count is greater than 2393 or equal to the size of the source file, the operation will fail with 2394 NFS4ERR_INVAL. The destination offset or destination offset plus 2395 count may be greater than the size of the destination file. This 2396 allows for the client to issue parallel copies to implement 2397 operations such as "cat file1 file2 file3 file4 > dest". 2399 If the destination file is created as a result of this command, the 2400 destination file's size will be equal to the number of bytes 2401 successfully copied. If the destination file already existed, the 2402 destination file's size may increase as a result of this operation 2403 (e.g. if ca_dst_offset plus ca_count is greater than the 2404 destination's initial size). 2406 If the ca_source_server list is specified, then this is an inter- 2407 server copy operation and the source file is on a remote server. The 2408 client is expected to have previously issued a successful COPY_NOTIFY 2409 request to the remote source server. The ca_source_server list 2410 SHOULD be the same as the COPY_NOTIFY response's cnr_source_server 2411 list. If the client includes the entries from the COPY_NOTIFY 2412 response's cnr_source_server list in the ca_source_server list, the 2413 source server can indicate a specific copy protocol for the 2414 destination server to use by returning a URL, which specifies both a 2415 protocol service and server name. Server-to-server copy protocol 2416 considerations are described in Section 2.2.3 and Section 2.4.1. 2418 The ca_flags argument allows the copy operation to be customized in 2419 the following ways using the guarded flag (COPY4_GUARDED) and the 2420 metadata flag (COPY4_METADATA). 2422 If the guarded flag is set and the destination exists on the server, 2423 this operation will fail with NFS4ERR_EXIST. 2425 If the guarded flag is not set and the destination exists on the 2426 server, the behavior is implementation dependent. 
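As a non-normative illustration of the argument semantics described above, the following C sketch populates COPY4args for a guarded, whole-file, intra-server copy (a ca_count of 0 (zero) meaning copy from ca_src_offset through EOF). The structure and field names are assumed to be the rpcgen output of the XDR in Section 12.1.1; the struct nfs4_compound type, the nfs4_compound_add() helper, and the OP_COPY constant naming are hypothetical client-side conveniences, not protocol elements.

   #include <string.h>

   /* Non-normative sketch: a guarded, whole-file, intra-server COPY.
    * Assumes the COMPOUND already contains: PUTFH src-fh; SAVEFH;
    * PUTFH dst-fh (so ca_destination is left empty). */
   static void
   add_guarded_whole_file_copy(struct nfs4_compound *cmp)
   {
       COPY4args args;

       memset(&args, 0, sizeof(args));
       args.ca_src_offset = 0;             /* start of the source file     */
       args.ca_dst_offset = 0;             /* start of the destination     */
       args.ca_count      = 0;             /* 0: copy ca_src_offset to EOF */
       args.ca_flags      = COPY4_GUARDED; /* NFS4ERR_EXIST if the         */
                                           /* destination already exists   */

       /* Empty ca_destination: CURRENT_FH already names the target file. */
       args.ca_destination.utf8string_len = 0;
       args.ca_destination.utf8string_val = NULL;

       /* Empty ca_source_server list: this is an intra-server copy. */
       args.ca_source_server.ca_source_server_len = 0;
       args.ca_source_server.ca_source_server_val = NULL;

       nfs4_compound_add(cmp, OP_COPY, &args);
   }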
2428 If the metadata flag is set and the client is requesting a whole file 2429 copy (i.e., ca_count is 0 (zero)), a subset of the destination file's 2430 attributes MUST be the same as the source file's corresponding 2431 attributes and a subset of the destination file's attributes SHOULD 2432 be the same as the source file's corresponding attributes. The 2433 attributes in the MUST and SHOULD copy subsets will be defined for 2434 each NFS version. 2436 For NFSv4.1, Table 1 and Table 2 list the REQUIRED and RECOMMENDED 2437 attributes respectively. A "MUST" in the "Copy to destination file?" 2438 column indicates that the attribute is part of the MUST copy set. A 2439 "SHOULD" in the "Copy to destination file?" column indicates that the 2440 attribute is part of the SHOULD copy set. 2442 +--------------------+----+---------------------------+ 2443 | Name | Id | Copy to destination file? | 2444 +--------------------+----+---------------------------+ 2445 | supported_attrs | 0 | no | 2446 | type | 1 | MUST | 2447 | fh_expire_type | 2 | no | 2448 | change | 3 | SHOULD | 2449 | size | 4 | MUST | 2450 | link_support | 5 | no | 2451 | symlink_support | 6 | no | 2452 | named_attr | 7 | no | 2453 | fsid | 8 | no | 2454 | unique_handles | 9 | no | 2455 | lease_time | 10 | no | 2456 | rdattr_error | 11 | no | 2457 | filehandle | 19 | no | 2458 | suppattr_exclcreat | 75 | no | 2459 +--------------------+----+---------------------------+ 2461 Table 1 2463 +--------------------+----+---------------------------+ 2464 | Name | Id | Copy to destination file? | 2465 +--------------------+----+---------------------------+ 2466 | acl | 12 | MUST | 2467 | aclsupport | 13 | no | 2468 | archive | 14 | no | 2469 | cansettime | 15 | no | 2470 | case_insensitive | 16 | no | 2471 | case_preserving | 17 | no | 2472 | change_policy | 60 | no | 2473 | chown_restricted | 18 | MUST | 2474 | dacl | 58 | MUST | 2475 | dir_notif_delay | 56 | no | 2476 | dirent_notif_delay | 57 | no | 2477 | fileid | 20 | no | 2478 | files_avail | 21 | no | 2479 | files_free | 22 | no | 2480 | files_total | 23 | no | 2481 | fs_charset_cap | 76 | no | 2482 | fs_layout_type | 62 | no | 2483 | fs_locations | 24 | no | 2484 | fs_locations_info | 67 | no | 2485 | fs_status | 61 | no | 2486 | hidden | 25 | MUST | 2487 | homogeneous | 26 | no | 2488 | layout_alignment | 66 | no | 2489 | layout_blksize | 65 | no | 2490 | layout_hint | 63 | no | 2491 | layout_type | 64 | no | 2492 | maxfilesize | 27 | no | 2493 | maxlink | 28 | no | 2494 | maxname | 29 | no | 2495 | maxread | 30 | no | 2496 | maxwrite | 31 | no | 2497 | mdsthreshold | 68 | no | 2498 | mimetype | 32 | MUST | 2499 | mode | 33 | MUST | 2500 | mode_set_masked | 74 | no | 2501 | mounted_on_fileid | 55 | no | 2502 | no_trunc | 34 | no | 2503 | numlinks | 35 | no | 2504 | owner | 36 | MUST | 2505 | owner_group | 37 | MUST | 2506 | quota_avail_hard | 38 | no | 2507 | quota_avail_soft | 39 | no | 2508 | quota_used | 40 | no | 2509 | rawdev | 41 | no | 2510 | retentevt_get | 71 | MUST | 2511 | retentevt_set | 72 | no | 2512 | retention_get | 69 | MUST | 2513 | retention_hold | 73 | MUST | 2514 | retention_set | 70 | no | 2515 | sacl | 59 | MUST | 2516 | space_avail | 42 | no | 2517 | space_free | 43 | no | 2518 | space_freed | 78 | no | 2519 | space_reserved | 77 | MUST | 2520 | space_total | 44 | no | 2521 | space_used | 45 | no | 2522 | system | 46 | MUST | 2523 | time_access | 47 | MUST | 2524 | time_access_set | 48 | no | 2525 | time_backup | 49 | no | 2526 | time_create | 50 | MUST | 
2527 | time_delta | 51 | no | 2528 | time_metadata | 52 | SHOULD | 2529 | time_modify | 53 | MUST | 2530 | time_modify_set | 54 | no | 2531 +--------------------+----+---------------------------+ 2532 Table 2 2534 [NOTE: The source file's attribute values will take precedence over 2535 any attribute values inherited by the destination file.] 2537 In the case of an inter-server copy or an intra-server copy between 2538 file systems, the attributes supported for the source file and 2539 destination file could be different. By definition, the REQUIRED 2540 attributes will be supported in all cases. If the metadata flag is 2541 set and the source file has a RECOMMENDED attribute that is not 2542 supported for the destination file, the copy MUST fail with 2543 NFS4ERR_ATTRNOTSUPP. 2545 Any attribute supported by the destination server that is not set on 2546 the source file SHOULD be left unset. 2548 Metadata attributes not exposed via the NFS protocol SHOULD be copied 2549 to the destination file where appropriate. 2551 The destination file's named attributes are not duplicated from the 2552 source file. After the copy process completes, the client MAY 2553 attempt to duplicate named attributes using standard NFSv4 2554 operations. However, the destination file's named attribute 2555 capabilities MAY be different from the source file's named attribute 2556 capabilities. 2558 If the metadata flag is not set and the client is requesting a whole 2559 file copy (i.e., ca_count is 0 (zero)), the destination file's 2560 metadata is implementation dependent. 2562 If the client is requesting a partial file copy (i.e., ca_count is 2563 not 0 (zero)), the client SHOULD NOT set the metadata flag and the 2564 server MUST ignore the metadata flag. 2566 If the operation does not result in an immediate failure, the server 2567 will return NFS4_OK, and the CURRENT_FH will remain the destination's 2568 filehandle. 2570 If an immediate failure does occur, cr_bytes_copied will be set to 2571 the number of bytes copied to the destination file before the error 2572 occurred. The cr_bytes_copied value indicates the number of bytes 2573 copied but not which specific bytes have been copied. 2575 A return of NFS4_OK indicates that either the operation is complete 2576 or the operation was initiated and a callback will be used to deliver 2577 the final status of the operation. 2579 If the cr_callback_id is returned, this indicates that the operation 2580 was initiated and a CB_COPY callback will deliver the final results 2581 of the operation. The cr_callback_id stateid is termed a copy 2582 stateid in this context. The server is given the option of returning 2583 the results in a callback because the data may require a relatively 2584 long period of time to copy. 2586 If no cr_callback_id is returned, the operation completed 2587 synchronously and no callback will be issued by the server. The 2588 completion status of the operation is indicated by cr_status. 2590 If the copy completes successfully, either synchronously or 2591 asynchronously, the data copied from the source file to the 2592 destination file MUST appear identical to the NFS client. However, 2593 the NFS server's on-disk representation of the data in the source 2594 file and destination file MAY differ. For example, the NFS server 2595 might encrypt, compress, deduplicate, or otherwise represent the 2596 on-disk data in the source and destination file differently.
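The synchronous/asynchronous result handling described above can be summarized with the following non-normative C sketch. COPY4res is assumed to be the rpcgen output of the XDR in Section 12.1.2; wait_for_cb_copy() and note_copy_failure() are hypothetical placeholders for client-specific logic.

   /* Non-normative sketch: classify a COPY result. */
   static void
   handle_copy_result(const COPY4res *res)
   {
       if (res->cr_status != NFS4_OK) {
           /* Immediate failure: cr_bytes_copied says how many bytes
            * landed in the destination, not which ones. */
           note_copy_failure(res->cr_status,
                             res->COPY4res_u.cr_bytes_copied);
           return;
       }

       if (res->COPY4res_u.cr_callback_id.cr_callback_id_len == 1) {
           /* Asynchronous: remember the copy stateid and wait for the
            * final status to arrive in a CB_COPY callback. */
           wait_for_cb_copy(
               &res->COPY4res_u.cr_callback_id.cr_callback_id_val[0]);
       } else {
           /* No copy stateid: the copy completed synchronously and
            * cr_status (NFS4_OK here) is the final status. */
       }
   }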
2598 In the event of a failure, the state of the destination file is 2599 implementation dependent. The COPY operation may fail for the 2600 following reasons (this is a partial list): 2602 NFS4ERR_MOVED: The file system which contains the source file, or 2603 the destination file or directory is not present. The client can 2604 determine the correct location and reissue the operation with the 2605 correct location. 2607 NFS4ERR_NOTSUPP: The copy offload operation is not supported by the 2608 NFS server receiving this request. 2610 NFS4ERR_PARTNER_NOTSUPP: The remote server does not support the 2611 server-to-server copy offload protocol. 2613 NFS4ERR_OFFLOAD_DENIED: The copy offload operation is supported by 2614 both the source and the destination, but the destination is not 2615 allowing it for this file. If the client sees this error, it 2616 should fall back to the normal copy semantics. 2618 NFS4ERR_PARTNER_NO_AUTH: The remote server does not authorize a 2619 server-to-server copy offload operation. This may be due to the 2620 client's failure to send the COPY_NOTIFY operation to the remote 2621 server, the remote server receiving a server-to-server copy 2622 offload request after the copy lease time expired, or for some 2623 other permission problem. 2625 NFS4ERR_FBIG: The copy operation would have caused the file to grow 2626 beyond the server's limit. 2628 NFS4ERR_NOTDIR: The CURRENT_FH is a file and ca_destination has non- 2629 zero length. 2631 NFS4ERR_WRONG_TYPE: The SAVED_FH is not a regular file. 2633 NFS4ERR_ISDIR: The CURRENT_FH is a directory and ca_destination has 2634 zero length. 2636 NFS4ERR_INVAL: The source offset or offset plus count are greater 2637 than or equal to the size of the source file. 2639 NFS4ERR_DELAY: The server does not have the resources to perform the 2640 copy operation at the current time. The client should retry the 2641 operation sometime in the future. 2643 NFS4ERR_METADATA_NOTSUPP: The destination file cannot support the 2644 same metadata as the source file. 2646 NFS4ERR_WRONGSEC: The security mechanism being used by the client 2647 does not match the server's security policy. 2649 12.2. Operation 60: COPY_ABORT - Cancel a server-side copy 2651 12.2.1. ARGUMENT 2653 struct COPY_ABORT4args { 2654 /* CURRENT_FH: destination file */ 2655 stateid4 caa_stateid; 2656 }; 2658 12.2.2. RESULT 2660 struct COPY_ABORT4res { 2661 nfsstat4 car_status; 2662 }; 2664 12.2.3. DESCRIPTION 2666 COPY_ABORT is used for both intra- and inter-server asynchronous 2667 copies. The COPY_ABORT operation allows the client to cancel a 2668 server-side copy operation that it initiated. This operation is sent 2669 in a COMPOUND request from the client to the destination server. 2670 This operation may be used to cancel a copy when the application that 2671 requested the copy exits before the operation is completed or for 2672 some other reason. 2674 The request contains the filehandle and copy stateid cookies that act 2675 as the context for the previously initiated copy operation. 2677 The result's car_status field indicates whether the cancel was 2678 successful or not. A value of NFS4_OK indicates that the copy 2679 operation was canceled and no callback will be issued by the server. 2680 A copy operation that is successfully canceled may result in none, 2681 some, or all of the data being copied. 2683 If the server supports asynchronous copies, the server is REQUIRED to 2684 support the COPY_ABORT operation.
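A minimal, non-normative sketch of issuing the cancel follows; it reuses the hypothetical struct nfs4_compound/nfs4_compound_add() helpers from the earlier sketches and assumes the copy stateid saved from the asynchronous COPY reply.

   /* Non-normative sketch: cancel an in-flight asynchronous copy.
    * CURRENT_FH must already have been set (via PUTFH) to the
    * destination file named by the original COPY. */
   static void
   add_copy_abort(struct nfs4_compound *cmp, const stateid4 *copy_stateid)
   {
       COPY_ABORT4args args;

       args.caa_stateid = *copy_stateid;
       nfs4_compound_add(cmp, OP_COPY_ABORT, &args);
   }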
2686 The COPY_ABORT operation may fail for the following reasons (this is 2687 a partial list): 2689 NFS4ERR_NOTSUPP: The abort operation is not supported by the NFS 2690 server receiving this request. 2692 NFS4ERR_RETRY: The abort failed, but a retry at some time in the 2693 future MAY succeed. 2695 NFS4ERR_COMPLETE_ALREADY: The abort failed, and a callback will 2696 deliver the results of the copy operation. 2698 NFS4ERR_SERVERFAULT: An error occurred on the server that does not 2699 map to a specific error code. 2701 12.3. Operation 61: COPY_NOTIFY - Notify a source server of a future 2702 copy 2704 12.3.1. ARGUMENT 2706 struct COPY_NOTIFY4args { 2707 /* CURRENT_FH: source file */ 2708 netloc4 cna_destination_server; 2709 }; 2711 12.3.2. RESULT 2713 struct COPY_NOTIFY4resok { 2714 nfstime4 cnr_lease_time; 2715 netloc4 cnr_source_server<>; 2716 }; 2718 union COPY_NOTIFY4res switch (nfsstat4 cnr_status) { 2719 case NFS4_OK: 2720 COPY_NOTIFY4resok resok4; 2721 default: 2722 void; 2723 }; 2725 12.3.3. DESCRIPTION 2727 This operation is used for an inter-server copy. A client sends this 2728 operation in a COMPOUND request to the source server to authorize a 2729 destination server identified by cna_destination_server to read the 2730 file specified by CURRENT_FH on behalf of the given user. 2732 The cna_destination_server MUST be specified using the netloc4 2733 network location format. The server is not required to resolve the 2734 cna_destination_server address before completing this operation. 2736 If this operation succeeds, the source server will allow the 2737 cna_destination_server to copy the specified file on behalf of the 2738 given user. If COPY_NOTIFY succeeds, the destination server is 2739 granted permission to read the file as long as both of the following 2740 conditions are met: 2742 o The destination server begins reading the source file before the 2743 cnr_lease_time expires. If the cnr_lease_time expires while the 2744 destination server is still reading the source file, the 2745 destination server is allowed to finish reading the file. 2747 o The client has not issued a COPY_REVOKE for the same combination 2748 of user, filehandle, and destination server. 2750 The cnr_lease_time is chosen by the source server. A cnr_lease_time 2751 of 0 (zero) indicates an infinite lease. To renew the copy lease 2752 time the client should resend the same copy notification request to 2753 the source server. 2755 To avoid the need for synchronized clocks, copy lease times are 2756 granted by the server as a time delta. However, there is a 2757 requirement that the client and server clocks do not drift 2758 excessively over the duration of the lease. There is also the issue 2759 of propagation delay across the network which could easily be several 2760 hundred milliseconds as well as the possibility that requests will be 2761 lost and need to be retransmitted. 2763 To take propagation delay into account, the client should subtract it 2764 from copy lease times (e.g., if the client estimates the one-way 2765 propagation delay as 200 milliseconds, then it can assume that the 2766 lease is already 200 milliseconds old when it gets it). In addition, 2767 it will take another 200 milliseconds to get a response back to the 2768 server. So the client must send a lease renewal or send the copy 2769 offload request to the cna_destination_server at least 400 2770 milliseconds before the copy lease would expire. 
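A non-normative sketch of this arithmetic follows; all names other than cnr_lease_time are illustrative, and the lease delta is assumed to have been converted to milliseconds by the client.

   #include <stdint.h>

   /* Non-normative sketch of the renewal arithmetic described above.
    * lease_ms is the cnr_lease_time delta in milliseconds;
    * one_way_delay_ms is the client's own estimate of propagation delay. */
   static int64_t
   copy_lease_slack_ms(int64_t lease_ms, int64_t one_way_delay_ms)
   {
       /* The lease is already one_way_delay_ms old when it arrives, and a
        * renewal or copy offload request needs another one_way_delay_ms
        * to reach the server. */
       return lease_ms - (2 * one_way_delay_ms);
   }

   /* Example: a 5 second lease with an estimated 200 millisecond one-way
    * delay leaves copy_lease_slack_ms(5000, 200) == 4600, i.e., the client
    * must renew or send the COPY at least 400 milliseconds before the
    * nominal expiry. */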
If the propagation 2771 delay varies over the life of the lease (e.g., the client is on a 2772 mobile host), the client will need to continuously subtract the 2773 increase in propagation delay from the copy lease times. 2775 The server's copy lease period configuration should take into account 2776 the network distance of the clients that will be accessing the 2777 server's resources. It is expected that the lease period will take 2778 into account the network propagation delays and other network delay 2779 factors for the client population. Since the protocol does not allow 2780 for an automatic method to determine an appropriate copy lease 2781 period, the server's administrator may have to tune the copy lease 2782 period. 2784 A successful response will also contain a list of names, addresses, 2785 and URLs called cnr_source_server, on which the source is willing to 2786 accept connections from the destination. These might not be 2787 reachable from the client and might be located on networks to which 2788 the client has no connection. 2790 If the client wishes to perform an inter-server copy, the client MUST 2791 send a COPY_NOTIFY to the source server. Therefore, the source 2792 server MUST support COPY_NOTIFY. 2794 For a copy only involving one server (the source and destination are 2795 on the same server), this operation is unnecessary. 2797 The COPY_NOTIFY operation may fail for the following reasons (this is 2798 a partial list): 2800 NFS4ERR_MOVED: The file system which contains the source file is not 2801 present on the source server. The client can determine the 2802 correct location and reissue the operation with the correct 2803 location. 2805 NFS4ERR_NOTSUPP: The copy offload operation is not supported by the 2806 NFS server receiving this request. 2808 NFS4ERR_WRONGSEC: The security mechanism being used by the client 2809 does not match the server's security policy. 2811 12.4. Operation 62: COPY_REVOKE - Revoke a destination server's copy 2812 privileges 2814 12.4.1. ARGUMENT 2816 struct COPY_REVOKE4args { 2817 /* CURRENT_FH: source file */ 2818 netloc4 cra_destination_server; 2819 }; 2821 12.4.2. RESULT 2823 struct COPY_REVOKE4res { 2824 nfsstat4 crr_status; 2825 }; 2827 12.4.3. DESCRIPTION 2829 This operation is used for an inter-server copy. A client sends this 2830 operation in a COMPOUND request to the source server to revoke the 2831 authorization of a destination server identified by 2832 cra_destination_server from reading the file specified by CURRENT_FH 2833 on behalf of given user. If the cra_destination_server has already 2834 begun copying the file, a successful return from this operation 2835 indicates that further access will be prevented. 2837 The cra_destination_server MUST be specified using the netloc4 2838 network location format. The server is not required to resolve the 2839 cra_destination_server address before completing this operation. 2841 The COPY_REVOKE operation is useful in situations in which the source 2842 server granted a very long or infinite lease on the destination 2843 server's ability to read the source file and all copy operations on 2844 the source file have been completed. 2846 For a copy only involving one server (the source and destination are 2847 on the same server), this operation is unnecessary. 2849 If the server supports COPY_NOTIFY, the server is REQUIRED to support 2850 the COPY_REVOKE operation. 
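The way COPY_NOTIFY, COPY, and COPY_REVOKE fit together for an inter-server copy can be summarized in the following non-normative C sketch; every name here (struct nfs_client, send_copy_notify(), send_copy(), send_copy_revoke()) is a hypothetical client wrapper around the corresponding operation, not a protocol element.

   /* Non-normative sketch of the inter-server copy sequence. */
   static int
   inter_server_copy(struct nfs_client *src, struct nfs_client *dst,
                     const netloc4 *dst_server)
   {
       COPY_NOTIFY4resok notify;
       int status;

       /* 1. Authorize the destination to read the source file; the reply
        *    carries cnr_lease_time and the cnr_source_server list. */
       status = send_copy_notify(src, dst_server, &notify);
       if (status != NFS4_OK)
           return status;

       /* 2. Ask the destination server to pull the file, passing the
        *    cnr_source_server list as ca_source_server. */
       status = send_copy(dst,
                          notify.cnr_source_server.cnr_source_server_val,
                          notify.cnr_source_server.cnr_source_server_len);

       /* 3. If the granted lease was long or infinite and all copies of
        *    this file are finished, withdraw the authorization. */
       send_copy_revoke(src, dst_server);
       return status;
   }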
2852 The COPY_REVOKE operation may fail for the following reasons (this is 2853 a partial list): 2855 NFS4ERR_MOVED: The file system which contains the source file is not 2856 present on the source server. The client can determine the 2857 correct location and reissue the operation with the correct 2858 location. 2860 NFS4ERR_NOTSUPP: The copy offload operation is not supported by the 2861 NFS server receiving this request. 2863 12.5. Operation 63: COPY_STATUS - Poll for status of a server-side copy 2865 12.5.1. ARGUMENT 2867 struct COPY_STATUS4args { 2868 /* CURRENT_FH: destination file */ 2869 stateid4 csa_stateid; 2870 }; 2872 12.5.2. RESULT 2874 struct COPY_STATUS4resok { 2875 length4 csr_bytes_copied; 2876 nfsstat4 csr_complete<1>; 2877 }; 2879 union COPY_STATUS4res switch (nfsstat4 csr_status) { 2880 case NFS4_OK: 2881 COPY_STATUS4resok resok4; 2882 default: 2883 void; 2884 }; 2886 12.5.3. DESCRIPTION 2888 COPY_STATUS is used for both intra- and inter-server asynchronous 2889 copies. The COPY_STATUS operation allows the client to poll the 2890 server to determine the status of an asynchronous copy operation. 2891 This operation is sent by the client to the destination server. 2893 If this operation is successful, the number of bytes copied are 2894 returned to the client in the csr_bytes_copied field. The 2895 csr_bytes_copied value indicates the number of bytes copied but not 2896 which specific bytes have been copied. 2898 If the optional csr_complete field is present, the copy has 2899 completed. In this case the status value indicates the result of the 2900 asynchronous copy operation. In all cases, the server will also 2901 deliver the final results of the asynchronous copy in a CB_COPY 2902 operation. 2904 The failure of this operation does not indicate the result of the 2905 asynchronous copy in any way. 2907 If the server supports asynchronous copies, the server is REQUIRED to 2908 support the COPY_STATUS operation. 2910 The COPY_STATUS operation may fail for the following reasons (this is 2911 a partial list): 2913 NFS4ERR_NOTSUPP: The copy status operation is not supported by the 2914 NFS server receiving this request. 2916 NFS4ERR_BAD_STATEID: The stateid is not valid (see Section 2.3.2 2917 below). 2919 NFS4ERR_EXPIRED: The stateid has expired (see Copy Offload Stateid 2920 section below). 2922 12.6. Modification to Operation 42: EXCHANGE_ID - Instantiate Client ID 2924 12.6.1. ARGUMENT 2926 /* new */ 2927 const EXCHGID4_FLAG_SUPP_FENCE_OPS = 0x00000004; 2929 12.6.2. RESULT 2931 Unchanged 2933 12.6.3. MOTIVATION 2935 Enterprise applications require guarantees that an operation has 2936 either aborted or completed. NFSv4.1 provides this guarantee as long 2937 as the session is alive: simply send a SEQUENCE operation on the same 2938 slot with a new sequence number, and the successful return of 2939 SEQUENCE indicates the previous operation has completed. However, if 2940 the session is lost, there is no way to know when any in progress 2941 operations have aborted or completed. In hindsight, the NFSv4.1 2942 specification should have mandated that DESTROY_SESSION abort/ 2943 complete all outstanding operations. 2945 12.6.4. DESCRIPTION 2947 A client SHOULD request the EXCHGID4_FLAG_SUPP_FENCE_OPS capability 2948 when it sends an EXCHANGE_ID operation. The server SHOULD set this 2949 capability in the EXCHANGE_ID reply whether the client requests it or 2950 not. 
If the client ID is created with this capability, then the 2951 following will occur: 2953 o The server will not reply to DESTROY_SESSION until all operations 2954 in progress are completed or aborted. 2956 o The server will not reply to subsequent EXCHANGE_ID invoked on the 2957 same Client Owner with a new verifier until all operations in 2958 progress on the Client ID's session are completed or aborted. 2960 o When DESTROY_CLIENTID is invoked, any sessions (both idle 2961 and non-idle), opens, locks, delegations, layouts, and/or wants 2962 (Section 18.49) associated with the client ID are removed. 2963 Pending operations will be completed or aborted before the 2964 sessions, opens, locks, delegations, layouts, and/or wants are 2965 deleted. 2967 o The NFS server SHOULD support client ID trunking, and if it does 2968 and the EXCHGID4_FLAG_SUPP_FENCE_OPS capability is enabled, then a 2969 session ID created on one node of the storage cluster MUST be 2970 destroyable via DESTROY_SESSION. In addition, DESTROY_CLIENTID 2971 and an EXCHANGE_ID with a new verifier affect all sessions 2972 regardless of what node the sessions were created on. 2974 12.7. Operation 64: INITIALIZE 2976 This operation can be used to initialize the structure imposed by an 2977 application onto a file and to punch a hole into a file. 2979 The server has no concept of the structure imposed by the 2980 application. It is only when the application writes to a section of 2981 the file that order gets imposed. In order to detect corruption even 2982 before the application utilizes the file, the application will want 2983 to initialize a range of ADBs. It uses the INITIALIZE operation to 2984 do so. 2986 12.7.1. ARGUMENT 2988 /* 2989 * We use data_content4 in case we wish to 2990 * extend new types later. Note that we 2991 * are explicitly disallowing data. 2992 */ 2993 union initialize_arg4 switch (data_content4 content) { 2994 case NFS4_CONTENT_APP_BLOCK: 2995 app_data_block4 ia_adb; 2996 case NFS4_CONTENT_HOLE: 2997 data_info4 ia_hole; 2998 default: 2999 void; 3000 }; 3002 struct INITIALIZE4args { 3003 /* CURRENT_FH: file */ 3004 stateid4 ia_stateid; 3005 stable_how4 ia_stable; 3006 initialize_arg4 ia_data<>; 3007 }; 3009 12.7.2. RESULT 3011 struct INITIALIZE4resok { 3012 count4 ir_count; 3013 stable_how4 ir_committed; 3014 verifier4 ir_writeverf; 3015 data_content4 ir_sparse; 3016 }; 3018 union INITIALIZE4res switch (nfsstat4 status) { 3019 case NFS4_OK: 3020 INITIALIZE4resok resok4; 3021 default: 3022 void; 3023 }; 3025 12.7.3. DESCRIPTION 3027 When the client invokes the INITIALIZE operation, it has two desired 3028 results: 3030 1. The structure described by the app_data_block4 be imposed on the 3031 file. 3033 2. The contents described by the app_data_block4 be sparse. 3035 If the server supports the INITIALIZE operation, it still might not 3036 support sparse files. So if it receives the INITIALIZE operation, 3037 then it MUST populate the contents of the file with the initialized 3038 ADBs. In other words, if the server supports INITIALIZE, then it 3039 supports the concept of ADBs. [[Comment.7: Do we want to support an 3040 asynchronous INITIALIZE? Do we have to? --TH]] [[Comment.8: Need to 3041 document union arm error code. --TH]] 3043 If the data was already initialized, there are two interesting 3044 scenarios: 3046 1. The data blocks are allocated. 3048 2. Initializing in the middle of an existing ADB. 3050 If the data blocks were already allocated, then the INITIALIZE is a 3051 hole punch operation.
If the server supports sparse files, then the 3052 data blocks are to be deallocated. If not, then the data blocks are 3053 to be rewritten in the indicated ADB format. [[Comment.9: Need to 3054 document interaction between space reservation and hole punching? 3055 --TH]] 3057 Since the server has no knowledge of ADBs, it should not report 3058 misaligned creation of ADBs. Even while it can detect them, it 3059 cannot disallow them, as the application might be in the process of 3060 changing the size of the ADBs. Thus the server must be prepared to 3061 handle an INITIALIZE into an existing ADB. 3063 This document does not mandate the manner in which the server stores 3064 ADBs sparsely for a file. It does assume that if ADBs are stored 3065 sparsely, then the server can detect when an INITIALIZE arrives that 3066 will force a new ADB to start inside an existing ADB. For example, 3067 assume that ADBi has an adb_block_size of 4k and that an INITIALIZE 3068 starts 1k inside ADBi. The server should [[Comment.10: Need to flesh 3069 this out. --TH]] 3071 12.7.3.1. Hole punching 3073 Whenever a client wishes to deallocate the blocks backing a 3074 particular region in the file, it calls the INITIALIZE operation with 3075 the current filehandle set to the filehandle of the file in question, 3076 and the start offset and length in bytes of the region set in 3077 ia_hole.hi_offset and ia_hole.hi_length respectively. All further reads to this region MUST return 3078 zeros until overwritten. The filehandle specified must be that of a 3079 regular file. 3081 Situations may arise where ia_hole.hi_offset and/or ia_hole.hi_offset 3082 + ia_hole.hi_length will not be aligned to a boundary that the server 3083 does allocations/deallocations in. For most filesystems, this is 3084 the block size of the file system. In such a case, the server can 3085 deallocate as many bytes as it can in the region. The blocks that 3086 cannot be deallocated MUST be zeroed. Except for the block 3087 deallocation and maximum hole punching capability, an INITIALIZE 3088 operation is to be treated similarly to a write of zeroes. 3090 The server is not required to complete deallocating the blocks 3091 specified in the operation before returning. It is acceptable to 3092 have the deallocation be deferred. In fact, INITIALIZE is merely a 3093 hint; it is valid for a server to return success without ever doing 3094 anything towards deallocating the blocks backing the region 3095 specified. However, any future reads to the region MUST return 3096 zeroes. 3098 If used to hole punch, INITIALIZE will result in the space_used 3099 attribute being decreased by the number of bytes that were 3100 deallocated. The space_freed attribute may or may not decrease, 3101 depending on the server's support and whether the blocks backing the specified 3102 range were shared or not. The size attribute will remain unchanged. 3104 The INITIALIZE operation MUST NOT change the space reservation 3105 guarantee of the file. While the server can deallocate the blocks 3106 specified by ia_hole.hi_offset and ia_hole.hi_length, future writes to this region 3107 MUST NOT fail with NFS4ERR_NOSPC. 3109 The INITIALIZE operation may fail for the following reasons (this is 3110 a partial list): 3112 NFS4ERR_NOTSUPP The hole punch operation is not supported by the 3113 NFS server receiving this request. 3115 NFS4ERR_ISDIR The current filehandle is of type NF4DIR. 3117 NFS4ERR_SYMLINK The current filehandle is of type NF4LNK. 3119 NFS4ERR_WRONG_TYPE The current filehandle does not designate an 3120 ordinary file.
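As a non-normative illustration of the hole punching described above, the following C sketch builds an INITIALIZE request that deallocates a byte range. INITIALIZE4args and initialize_arg4 are assumed to be the rpcgen output of the XDR in Section 12.7.1; the hi_offset/hi_length field names follow the hole punching text above (data_info4 itself is not reproduced in this section), and struct nfs4_compound/nfs4_compound_add() are the same hypothetical helpers used in the earlier sketches.

   /* Non-normative sketch: punch a hole over [offset, offset + length). */
   static void
   add_hole_punch(struct nfs4_compound *cmp, const stateid4 *open_stateid,
                  offset4 offset, length4 length)
   {
       initialize_arg4 arg;
       INITIALIZE4args args;

       arg.content = NFS4_CONTENT_HOLE;
       arg.initialize_arg4_u.ia_hole.hi_offset = offset;
       arg.initialize_arg4_u.ia_hole.hi_length = length;

       args.ia_stateid          = *open_stateid;
       args.ia_stable           = FILE_SYNC4;    /* existing stable_how4 */
       args.ia_data.ia_data_len = 1;
       args.ia_data.ia_data_val = &arg;

       /* CURRENT_FH must name a regular file; after this, reads of the
        * region MUST return zeros until it is overwritten. */
       nfs4_compound_add(cmp, OP_INITIALIZE, &args);
   }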
3122 12.8. Operation 67: IO_ADVISE - Application I/O access pattern hints 3124 This section introduces a new operation, named IO_ADVISE, which 3125 allows NFS clients to communicate application I/O access pattern 3126 hints to the NFS server. This new operation will allow hints to be 3127 sent to the server when applications use posix_fadvise, direct I/O, 3128 or at any other point that the client finds useful. 3130 12.8.1. ARGUMENT 3132 enum IO_ADVISE_type4 { 3133 IO_ADVISE4_NORMAL = 0, 3134 IO_ADVISE4_SEQUENTIAL = 1, 3135 IO_ADVISE4_SEQUENTIAL_BACKWARDS = 2, 3136 IO_ADVISE4_RANDOM = 3, 3137 IO_ADVISE4_WILLNEED = 4, 3138 IO_ADVISE4_WILLNEED_OPPORTUNISTIC = 5, 3139 IO_ADVISE4_DONTNEED = 6, 3140 IO_ADVISE4_NOREUSE = 7, 3141 IO_ADVISE4_READ = 8, 3142 IO_ADVISE4_WRITE = 9 3143 }; 3145 struct IO_ADVISE4args { 3146 /* CURRENT_FH: file */ 3147 stateid4 iar_stateid; 3148 offset4 iar_offset; 3149 length4 iar_count; 3150 bitmap4 iar_hints; 3151 }; 3153 12.8.2. RESULT 3155 struct IO_ADVISE4resok { 3156 bitmap4 ior_hints; 3157 }; 3159 union IO_ADVISE4res switch (nfsstat4 _status) { 3160 case NFS4_OK: 3161 IO_ADVISE4resok resok4; 3162 default: 3163 void; 3164 }; 3166 12.8.3. DESCRIPTION 3168 The IO_ADVISE operation sends an I/O access pattern hint to the 3169 server for the owner of the stateid for a given byte range specified by 3170 iar_offset and iar_count. The byte range specified by iar_offset and 3171 iar_count need not currently exist in the file, but the iar_hints 3172 will apply to the byte range when it does exist. If iar_count is 0, 3173 all data following iar_offset is specified. The server MAY ignore 3174 the advice. 3176 The following are the possible hints: 3178 IO_ADVISE4_NORMAL Specifies that the application has no advice to 3179 give on its behavior with respect to the specified data. It is 3180 the default characteristic if no advice is given. 3182 IO_ADVISE4_SEQUENTIAL Specifies that the stateid holder expects to 3183 access the specified data sequentially from lower offsets to 3184 higher offsets. 3186 IO_ADVISE4_SEQUENTIAL_BACKWARDS Specifies that the stateid holder 3187 expects to access the specified data sequentially from higher 3188 offsets to lower offsets. 3190 IO_ADVISE4_RANDOM Specifies that the stateid holder expects to access 3191 the specified data in a random order. 3193 IO_ADVISE4_WILLNEED Specifies that the stateid holder expects to 3194 access the specified data in the near future. 3196 IO_ADVISE4_WILLNEED_OPPORTUNISTIC Specifies that the stateid holder 3197 expects to possibly access the data in the near future. This is a 3198 speculative hint, and therefore the server should prefetch data or 3199 indirect blocks only if it can be done at a marginal cost. 3201 IO_ADVISE4_DONTNEED Specifies that the stateid holder expects that it 3202 will not access the specified data in the near future. 3204 IO_ADVISE4_NOREUSE Specifies that the stateid holder expects to access 3205 the specified data once and then not reuse it thereafter. 3207 IO_ADVISE4_READ Specifies that the stateid holder expects to read the 3208 specified data in the near future. 3210 IO_ADVISE4_WRITE Specifies that the stateid holder expects to write 3211 the specified data in the near future. 3213 The server will return success if the operation is properly formed; 3214 otherwise, the server will return an error. The server MUST NOT 3215 return an error if it does not recognize or does not support the 3216 requested advice.
This is also true even if the client sends 3217 contradictory hints to the server, e.g., IO_ADVISE4_SEQUENTIAL and 3218 IO_ADVISE4_RANDOM in a single IO_ADVISE operation. In this case, the 3219 server MUST return success and a ior_hints value that indicates the 3220 hint it intends to optimize. For contradictory hints, this may mean 3221 simply returning IO_ADVISE4_NORMAL, for example. 3223 The ior_hints value returned by the server is primarily for debugging 3224 purposes, since the server is under no obligation to carry out the 3225 hints that it describes in the ior_hints result. In addition, while 3226 the server may have intended to implement the hints returned in 3227 ior_hints, as time progresses, the server may need to change its 3228 handling of a given file due to several reasons including, but not 3229 limited to, memory pressure, additional IO_ADVISE hints sent by other 3230 clients, and heuristically detected file access patterns. 3232 The server MAY return different advice than what the client 3233 requested. If it does, then this might be due to one of several 3234 conditions, including, but not limited to: another client advising of 3235 a different I/O access pattern; a different I/O access pattern from 3236 another client that the server has heuristically detected; or 3237 the server not being able to support the requested I/O access pattern, 3238 perhaps due to a temporary resource limitation. 3240 Each issuance of the IO_ADVISE operation overrides all previous 3241 issuances of IO_ADVISE for a given byte range. This effectively 3242 follows a strategy of last hint wins for a given stateid and byte 3243 range. 3245 Clients should assume that hints included in an IO_ADVISE operation 3246 will be forgotten once the file is closed. 3248 12.8.4. IMPLEMENTATION 3250 The NFS client may choose to issue an IO_ADVISE operation to the 3251 server in several different instances. 3253 The most obvious is in direct response to an application's execution 3254 of posix_fadvise. In this case, IO_ADVISE4_WRITE and IO_ADVISE4_READ 3255 may be set based upon the type of file access specified when the file 3256 was opened. 3258 Another useful point would be when an application indicates it is 3259 using direct I/O. Direct I/O may be specified at file open, in which 3260 case an IO_ADVISE may be included in the same compound as the OPEN 3261 operation with the IO_ADVISE4_NOREUSE flag set. Direct I/O may also 3262 be specified separately, in which case an IO_ADVISE operation can be 3263 sent to the server separately. As above, IO_ADVISE4_WRITE and 3264 IO_ADVISE4_READ may be set based upon the type of file access 3265 specified when the file was opened. 3267 12.8.5. pNFS File Layout Data Type Considerations 3269 The IO_ADVISE considerations for pNFS are very similar to the COMMIT 3270 considerations for pNFS. That is, as with COMMIT, some NFS server 3271 implementations prefer IO_ADVISE be done on the DS, and some prefer 3272 it be done on the MDS. 3274 So for the file's layout type, it is proposed that NFSv4.2 include an 3275 additional hint NFL42_UFLG_IO_ADVISE_THRU_MDS which is valid only on 3276 NFSv4.2 or higher. Any file's layout obtained with NFSv4.1 MUST NOT 3277 have NFL42_UFLG_IO_ADVISE_THRU_MDS set. Any file's layout obtained 3278 with NFSv4.2 MAY have NFL42_UFLG_IO_ADVISE_THRU_MDS set. If the 3279 client does not implement IO_ADVISE, then it MUST ignore 3280 NFL42_UFLG_IO_ADVISE_THRU_MDS.
3282 If NFL42_UFLG_IO_ADVISE_THRU_MDS is set and the client 3283 implements IO_ADVISE, then if the client wants the DS to honor IO_ADVISE, it 3284 MUST send the operation to the MDS, and the server will 3285 communicate the advice back to each DS. If the client sends IO_ADVISE 3286 to the DS, then the server MAY return NFS4ERR_NOTSUPP. 3288 If NFL42_UFLG_IO_ADVISE_THRU_MDS is not set, then this indicates to 3289 the client that if it wants to inform the server via IO_ADVISE of the 3290 client's intended use of the file, then the client SHOULD send an 3291 IO_ADVISE to each DS. While the client MAY always send IO_ADVISE to 3292 the MDS, if the server has not set NFL42_UFLG_IO_ADVISE_THRU_MDS, the 3293 client should expect that such an IO_ADVISE is futile. Note that a 3294 client SHOULD use the same set of arguments on each IO_ADVISE sent to 3295 a DS for the same open file reference. 3297 The server is not required to support different advice for different 3298 DSs with the same open file reference. 3300 12.8.5.1. Dense and Sparse Packing Considerations 3302 The IO_ADVISE operation MUST use the iar_offset and byte range as 3303 dictated by the presence or absence of NFL4_UFLG_DENSE. 3305 E.g., if NFL4_UFLG_DENSE is present, and a READ or WRITE to the DS 3306 for iar_offset 0 really means iar_offset 10000 in the logical file, 3307 then an IO_ADVISE for iar_offset 0 means iar_offset 10000. 3309 E.g., if NFL4_UFLG_DENSE is absent, then a READ or WRITE to the DS 3310 for iar_offset 0 really means iar_offset 0 in the logical file, and 3311 an IO_ADVISE for iar_offset 0 means iar_offset 0 in the logical file. 3313 E.g., if NFL4_UFLG_DENSE is present, the stripe unit is 1000 bytes 3314 and the stripe count is 10, and the dense DS file is serving 3315 iar_offset 0, then a READ or WRITE to the DS for iar_offsets 0, 1000, 3316 2000, and 3000 really means iar_offsets 10000, 20000, 30000, and 3317 40000 (implying a stripe count of 10 and a stripe unit of 1000), and 3318 an IO_ADVISE sent to the same DS with an iar_offset of 500 and an 3319 iar_count of 3000 means that the IO_ADVISE applies to these byte 3320 ranges of the dense DS file: 3322 - 500 to 999 3323 - 1000 to 1999 3324 - 2000 to 2999 3325 - 3000 to 3499 3327 I.e., the contiguous range 500 to 3499 as specified in IO_ADVISE. 3329 It also applies to these byte ranges of the logical file: 3331 - 10500 to 10999 (500 bytes) 3332 - 20000 to 20999 (1000 bytes) 3333 - 30000 to 30999 (1000 bytes) 3334 - 40000 to 40499 (500 bytes) 3335 (total 3000 bytes) 3337 E.g., if NFL4_UFLG_DENSE is absent, the stripe unit is 250 bytes, the 3338 stripe count is 4, and the sparse DS file is serving iar_offset 0, 3339 then a READ or WRITE to the DS for iar_offsets 0, 1000, 2000, and 3340 3000 really means iar_offsets 0, 1000, 2000, and 3000 in the logical 3341 file, keeping in mind that on the DS file, byte ranges 250 to 999, 3342 1250 to 1999, 2250 to 2999, and 3250 to 3999 are not accessible.
3343 Then an IO_ADVISE sent to the same DS with an iar_offset of 500 and 3344 an iar_count of 3000 means that the IO_ADVISE applies to these byte 3345 ranges of the logical file and the sparse DS file: 3347 - 500 to 999 (500 bytes) - no effect 3348 - 1000 to 1249 (250 bytes) - effective 3349 - 1250 to 1999 (750 bytes) - no effect 3350 - 2000 to 2249 (250 bytes) - effective 3351 - 2250 to 2999 (750 bytes) - no effect 3352 - 3000 to 3249 (250 bytes) - effective 3353 - 3250 to 3499 (250 bytes) - no effect 3354 (subtotal 2250 bytes) - no effect 3355 (subtotal 750 bytes) - effective 3356 (grand total 3000 bytes) - no effect + effective 3358 If neither of the flags NFL42_UFLG_IO_ADVISE_THRU_MDS and 3359 NFL4_UFLG_DENSE is set in the layout, then any IO_ADVISE request 3360 sent to the data server with a byte range that overlaps a stripe unit 3361 that the data server does not serve MUST NOT result in the status 3362 NFS4ERR_PNFS_IO_HOLE. Instead, the response SHOULD be successful, and 3363 if the server applies IO_ADVISE hints on any stripe units that 3364 overlap with the specified range, those hints SHOULD be indicated in 3365 the response. 3367 12.8.6. Number of Supported File Segments 3369 In theory, IO_ADVISE allows a client and server to support multiple 3370 file segments, meaning that different, possibly overlapping, byte 3371 ranges of the same open file reference will support different hints. 3372 This is not practical, and in general the server will support just 3373 one set of hints, and these will apply to the entire file. However, 3374 there are some hints that are very ephemeral and essentially amount 3375 to one-time instructions to the NFS server, which will be forgotten 3376 momentarily after IO_ADVISE is executed. 3378 The following hints will always apply to the entire file, regardless 3379 of the specified byte range: 3381 o IO_ADVISE4_NORMAL 3383 o IO_ADVISE4_SEQUENTIAL 3385 o IO_ADVISE4_SEQUENTIAL_BACKWARDS 3387 o IO_ADVISE4_RANDOM 3389 The following hints will always apply to the specified byte range, and 3390 will be treated as one-time instructions: 3392 o IO_ADVISE4_WILLNEED 3394 o IO_ADVISE4_WILLNEED_OPPORTUNISTIC 3396 o IO_ADVISE4_DONTNEED 3398 o IO_ADVISE4_NOREUSE 3400 The following hints are modifiers to all other hints, and will apply 3401 to the entire file and/or to a one-time instruction on the specified 3402 byte range: 3404 o IO_ADVISE4_READ 3406 o IO_ADVISE4_WRITE 3408 12.8.7. Possible Additional Hint - IO_ADVISE4_RECENTLY_USED 3410 IO_ADVISE4_RECENTLY_USED The client has recently accessed the byte 3411 range in its own cache. This informs the server that the data in 3412 the byte range remains important to the client. When the server 3413 reaches resource exhaustion, knowing which data is more important 3414 allows the server to make better choices about which data to, for 3415 example, purge from a cache or move to secondary storage. It also 3416 informs the server which delegations are more important, since if 3417 delegations are working correctly, once delegated to a client, a 3418 server might never receive another I/O request for the file. 3420 A use case for this hint is that of the NFS client or application 3421 restart. In the event of restart, the app's/client's cache will be 3422 cold and it will need to fill it from the server.
If the server is 3423 maintaining a list (LRU most likely) of byte ranges tagged with 3424 IO_ADVISE4_RECENTLY_USED, then the server could have stored the data 3425 in these ranges into a storage medium that is less expensive than 3426 DRAM, and faster than random access magnetic or optical media, such 3427 as flash. This allows the application and the storage system 3428 to co-operate end to end to meet a service level agreement/objective contracted 3429 to the end user by the IT provider. 3431 On the other side, this is effectively a hint regarding multi-level 3432 caching, and it may be more useful to specify a more formal multi- 3433 level caching system. In addition, the action to be taken by the 3434 server file system with this hint, and hence its usefulness, is 3435 unclear. For example, as most clients already cache data that they 3436 know is important, having this data cached twice may be unnecessary. 3437 In fact, substantial performance improvements have been demonstrated 3438 by making caches more exclusive between each other [25], not the 3439 other way around. This means that there is a strong argument to be 3440 made that servers should immediately purge the described cached data 3441 upon receiving this hint. Other work showed that even infinite sized 3442 secondary caches can be largely ineffective [26], but this of course 3443 is subject to the workload. 3445 12.9. Changes to Operation 51: LAYOUTRETURN 3447 12.9.1. Introduction 3449 In the pNFS description provided in [2], the client is not able to 3450 relay an error code from the DS to the MDS. In the specification of 3451 the Objects-Based Layout protocol [8], use is made of the opaque 3452 lrf_body field of the LAYOUTRETURN argument to do such a relaying of 3453 error codes. In this section, we define a new data structure to 3454 enable the passing of error codes back to the MDS and provide some 3455 guidelines on what both the client and MDS should expect in such 3456 circumstances. 3458 There are two broad classes of errors, transient and persistent. The 3459 client SHOULD strive to only use this new mechanism to report 3460 persistent errors. It MUST be able to deal with transient issues by 3461 itself. Also, while the client might consider an issue to be 3462 persistent, it MUST be prepared for the MDS to consider such issues 3463 to be expected. A prime example of this is if the MDS fences off a 3464 client from either a stateid or a filehandle. The client will get an 3465 error from the DS and might relay either NFS4ERR_ACCESS or 3466 NFS4ERR_STALE_STATEID back to the MDS, with the belief that this is a 3467 hard error. The MDS, on the other hand, is waiting for the client to 3468 report such an error. For it, the mission is accomplished in that 3469 the client has returned a layout that the MDS had most likely 3470 recalled. 3472 The existing LAYOUTRETURN operation is extended by introducing a new 3473 data structure to report errors, layoutreturn_device_error4. Also, 3474 layoutreturn_error_report4 is introduced to enable an array of such errors 3475 to be reported. 3477 12.9.2. ARGUMENT 3479 The ARGUMENT specification of the LAYOUTRETURN operation in section 3480 18.44.1 of [2] is augmented by the following XDR code [24]: 3482 struct layoutreturn_device_error4 { 3483 deviceid4 lrde_deviceid; 3484 nfsstat4 lrde_status; 3485 nfs_opnum4 lrde_opnum; 3486 }; 3488 struct layoutreturn_error_report4 { 3489 layoutreturn_device_error4 lrer_errors<>; 3490 }; 3492 12.9.3.
RESULT 3494 The RESULT of the LAYOUTRETURN operation is unchanged; see section 3495 18.44.2 of [2]. 3497 12.9.4. DESCRIPTION 3499 The following text is added to the end of the LAYOUTRETURN operation 3500 DESCRIPTION in section 18.44.3 of [2]. 3502 When a client uses LAYOUTRETURN with a type of LAYOUTRETURN4_FILE, 3503 then if the lrf_body field is NULL, it indicates to the MDS that the 3504 client experienced no errors. If lrf_body is non-NULL, then the 3505 field references error information which is layout type specific. 3506 I.e., the Objects-Based Layout protocol can continue to utilize 3507 lrf_body as specified in [8]. For the Files-Based Layout, the 3508 field references a layoutreturn_error_report4, which contains an 3509 array of layoutreturn_device_error4. 3511 Each individual layoutreturn_device_error4 describes a single error 3512 associated with a DS, which is identified via lrde_deviceid. The 3513 operation which returned the error is identified via lrde_opnum. 3514 Finally, the NFS error value (nfsstat4) encountered is provided via 3515 lrde_status and may consist of the following error codes: 3517 NFS4_OK: No issues were found for this device. 3519 NFS4ERR_NXIO: The client was unable to establish any communication 3520 with the DS. 3522 NFS4ERR_*: The client was able to establish communication with the 3523 DS and is returning one of the allowed error codes for the 3524 operation denoted by lrde_opnum. 3526 12.9.5. IMPLEMENTATION 3528 The following text is added to the end of the LAYOUTRETURN operation 3529 IMPLEMENTATION in section 18.44.4 of [2]. 3531 A client that expects to use pNFS for a mounted filesystem SHOULD 3532 check for pNFS support at mount time. This check SHOULD be performed 3533 by sending a GETDEVICELIST operation, followed by layout-type- 3534 specific checks for accessibility of each storage device returned by 3535 GETDEVICELIST. If the NFS server does not support pNFS, the 3536 GETDEVICELIST operation will be rejected with an NFS4ERR_NOTSUPP 3537 error; in this situation it is up to the client to determine whether 3538 it is acceptable to proceed with NFS-only access. 3540 Clients are expected to tolerate transient storage device errors, and 3541 hence clients SHOULD NOT use the LAYOUTRETURN error handling for 3542 device access problems that may be transient. The methods by which a 3543 client decides whether an access problem is transient vs. persistent 3544 are implementation-specific, but may include retrying I/Os to a data 3545 server under appropriate conditions. 3547 When an I/O fails to a storage device, the client SHOULD retry the 3548 failed I/O via the MDS. In this situation, before retrying the I/O, 3549 the client SHOULD return the layout, or the affected portion thereof, 3550 and SHOULD indicate which storage device or devices was problematic. 3551 If the client does not do this, the MDS may issue a layout recall 3552 callback in order to perform the retried I/O. 3554 The client needs to be cognizant that since this error handling is 3555 optional in the MDS, the MDS may silently ignore this functionality. 3556 Also, as the MDS may consider some issues the client reports to be 3557 expected (see Section 12.9.1), the client might find it difficult to 3558 detect an MDS which has not implemented error handling via 3559 LAYOUTRETURN. 3561 If an MDS is aware that a storage device is proving problematic to a 3562 client, the MDS SHOULD NOT include that storage device in any pNFS 3563 layouts sent to that client.
If the MDS is aware that a storage 3564 device is affecting many clients, then the MDS SHOULD NOT include 3565 that storage device in any pNFS layouts sent out. Clients must still 3566 be aware that the MDS might not have any choice in using the storage 3567 device, i.e., there might only be one possible layout for the system. 3569 Another interesting complication is that for existing files, the MDS 3570 might have no choice in which storage devices to hand out to clients. 3571 The MDS might try to restripe a file across a different storage 3572 device, but clients need to be aware that not all implementations 3573 have restriping support. 3575 An MDS SHOULD react to a client return of layouts with errors by not 3576 using the problematic storage devices in layouts for that client, but 3577 the MDS is not required to indefinitely retain per-client storage 3578 device error information. An MDS is also not required to 3579 automatically reinstate use of a previously problematic storage 3580 device; administrative intervention may be required instead. 3582 A client MAY perform I/O via the MDS even when the client holds a 3583 layout that covers the I/O; servers MUST support this client 3584 behavior, and MAY recall layouts as needed to complete I/Os. 3586 12.10. Operation 65: READ_PLUS 3588 READ_PLUS is a new read operation which allows NFS clients to avoid 3589 reading holes in a sparse file and to efficiently transfer ADBs. 3590 READ_PLUS is guaranteed to perform no worse than READ, and can 3591 dramatically improve performance with sparse files. 3593 READ_PLUS supports all the features of the existing NFSv4.1 READ 3594 operation [2] and adds a simple yet significant extension to the 3595 format of its response. The change allows the server to avoid 3596 returning data for portions of the file which are either initialized 3597 and contain no backing store or for which the result would appear to be so. 3598 I.e., if the result was a data block composed entirely of zeros, then 3599 it is easier to return a hole. Returning data blocks of uninitialized 3600 data wastes computational and network resources, thus reducing 3601 performance. READ_PLUS uses a new result structure that tells the 3602 client that the result is all zeroes AND the byte-range of the hole 3603 in which the request was made. 3605 If the client sends a READ operation, it is explicitly stating that 3606 it supports neither sparse files nor ADBs. So if a READ occurs 3607 on a sparse ADB or file, then the server must expand such data to be 3608 raw bytes. If a READ occurs in the middle of a hole or ADB, the 3609 server can only send back bytes starting from that offset. 3611 Such an operation is inefficient for transfer of sparse sections of 3612 the file. As such, READ is marked as OBSOLETE in NFSv4.2. Instead, 3613 a client should issue READ_PLUS. Note that as the client has no a 3614 priori knowledge of whether an ADB is present or not, it should 3615 always use READ_PLUS. 3617 12.10.1. ARGUMENT 3619 struct READ_PLUS4args { 3620 /* CURRENT_FH: file */ 3621 stateid4 rpa_stateid; 3622 offset4 rpa_offset; 3623 count4 rpa_count; 3624 }; 3626 12.10.2. RESULT 3628 union read_plus_content switch (data_content4 content) { 3629 case NFS4_CONTENT_DATA: 3630 opaque rpc_data<>; 3631 case NFS4_CONTENT_APP_BLOCK: 3632 app_data_block4 rpc_block; 3633 case NFS4_CONTENT_HOLE: 3634 data_info4 rpc_hole; 3635 default: 3636 void; 3637 }; 3639 /* 3640 * Allow a return of an array of contents.
3641 */ 3642 struct read_plus_res4 { 3643 bool rpr_eof; 3644 read_plus_content rpr_contents<>; 3645 }; 3647 union READ_PLUS4res switch (nfsstat4 status) { 3648 case NFS4_OK: 3649 read_plus_res4 resok4; 3650 default: 3651 void; 3652 }; 3654 12.10.3. DESCRIPTION 3656 The READ_PLUS operation is based upon the NFSv4.1 READ operation [2], 3657 and similarly reads data from the regular file identified by the 3658 current filehandle. 3660 The client provides an rpa_offset of where the READ_PLUS is to start 3661 and an rpa_count of how many bytes are to be read. An rpa_offset of 3662 zero means to read data starting at the beginning of the file. If 3663 rpa_offset is greater than or equal to the size of the file, the 3664 status NFS4_OK is returned with di_length (the data length) set to 3665 zero and eof set to TRUE. READ_PLUS is subject to access permissions 3666 checking. 3668 The READ_PLUS result is comprised of an array of rpr_contents, each 3669 of which describes a data_content4 type of data. For NFSv4.2, the 3670 allowed values are data, ADB, and hole. A server is required to 3671 support the data type, but not the ADB or hole types. Both an ADB and a hole 3672 must be returned in their entirety - clients must be prepared to get 3673 more information than they requested. 3675 If the data to be returned is comprised entirely of zeros, then the 3676 server may elect to return that data as a hole. The server 3677 differentiates this to the client by setting di_allocated to TRUE in 3678 this case. Note that in such a scenario, the server is not required 3679 to determine the full extent of the "hole" - it does not need to 3680 determine where the zeros start and end. 3682 The server may elect to return adjacent elements of the same type. 3683 For example, the guard pattern or block size of an ADB might change, 3684 which would require adjacent elements of type ADB. Likewise, if the 3685 server has a range of data comprised entirely of zeros and then a 3686 hole, it might want to return two adjacent holes to the client. 3688 If the client specifies an rpa_count value of zero, the READ_PLUS 3689 succeeds and returns zero bytes of data, again subject to access 3690 permissions checking. In all situations, the server may choose to 3691 return fewer bytes than specified by the client. The client needs to 3692 check for this condition and handle the condition appropriately. 3694 If the client specifies an rpa_offset and rpa_count value that is 3695 entirely contained within a hole of the file, then the di_offset and 3696 di_length returned must be for the entire hole. This result is 3697 considered valid until the file is changed (detected via the change 3698 attribute). The server MUST provide the same semantics for the hole 3699 as if the client read the region and received zeroes; the lifetime of the implied 3700 hole's contents MUST be exactly the same as that of any other read 3701 data. 3703 If the client specifies an rpa_offset and rpa_count value that begins 3704 in a non-hole of the file but extends into a hole, the server should 3705 return an array comprised of both data and a hole. The client MUST 3706 be prepared for the server to return a short read describing just the 3707 data. The client will then issue another READ_PLUS for the remaining 3708 bytes, to which the server will respond with information about the hole 3709 in the file.
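The following non-normative C sketch shows a client walking the rpr_contents array as described above. The types are assumed to be the rpcgen output of the XDR in Section 12.10.2, data_info4 is assumed to carry the di_offset/di_length fields used in this description, ADB contents are ignored for brevity, and copy_out()/zero_fill() are hypothetical helpers that clip their arguments to the requested byte range.

   /* Non-normative sketch: flatten a READ_PLUS result into a buffer. */
   static void
   consume_read_plus(const read_plus_res4 *res, offset4 rpa_offset,
                     count4 rpa_count, char *buf)
   {
       offset4      cur = rpa_offset;   /* running position in the file */
       unsigned int i;

       for (i = 0; i < res->rpr_contents.rpr_contents_len; i++) {
           const read_plus_content *c =
               &res->rpr_contents.rpr_contents_val[i];

           switch (c->content) {
           case NFS4_CONTENT_DATA:
               /* Ordinary bytes, starting at the running offset. */
               copy_out(buf, rpa_offset, rpa_count, cur,
                        c->read_plus_content_u.rpc_data.rpc_data_val,
                        c->read_plus_content_u.rpc_data.rpc_data_len);
               cur += c->read_plus_content_u.rpc_data.rpc_data_len;
               break;
           case NFS4_CONTENT_HOLE:
               /* The hole may be larger than the request; only the part
                * overlapping [rpa_offset, rpa_offset + rpa_count) is
                * zero-filled. */
               zero_fill(buf, rpa_offset, rpa_count,
                         c->read_plus_content_u.rpc_hole.di_offset,
                         c->read_plus_content_u.rpc_hole.di_length);
               cur = c->read_plus_content_u.rpc_hole.di_offset +
                     c->read_plus_content_u.rpc_hole.di_length;
               break;
           default:
               break;               /* NFS4_CONTENT_APP_BLOCK not handled */
           }
       }
       /* If fewer than rpa_count bytes were covered and rpr_eof is FALSE,
        * another READ_PLUS is sent starting at 'cur'. */
   }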
3711 Except when special stateids are used, the stateid value for a 3712 READ_PLUS request represents a value returned from a previous byte- 3713 range lock or share reservation request or the stateid associated 3714 with a delegation. The stateid identifies the associated owners, if 3715 any, and is used by the server to verify that the associated locks are 3716 still valid (e.g., have not been revoked). 3718 If the read ended at the end-of-file (formally, in a correctly formed 3719 READ_PLUS operation, if rpa_offset + rpa_count is equal to the size 3720 of the file), or the READ_PLUS operation extends beyond the size of 3721 the file (if rpa_offset + rpa_count is greater than the size of the 3722 file), eof is returned as TRUE; otherwise, it is FALSE. A successful 3723 READ_PLUS of an empty file will always return eof as TRUE. 3725 If the current filehandle is not an ordinary file, an error will be 3726 returned to the client. In the case that the current filehandle 3727 represents an object of type NF4DIR, NFS4ERR_ISDIR is returned. If 3728 the current filehandle designates a symbolic link, NFS4ERR_SYMLINK is 3729 returned. In all other cases, NFS4ERR_WRONG_TYPE is returned. 3731 For a READ_PLUS with a stateid value of all bits equal to zero, the 3732 server MAY allow the READ_PLUS to be serviced subject to mandatory 3733 byte-range locks or the current share deny modes for the file. For a 3734 READ_PLUS with a stateid value of all bits equal to one, the server 3735 MAY allow READ_PLUS operations to bypass locking checks at the 3736 server. 3738 On success, the current filehandle retains its value. 3740 12.10.4. IMPLEMENTATION 3742 If the server returns a short read, then the client should send 3743 another READ_PLUS to get the remaining data. A server may return 3744 less data than requested under several circumstances. The file may 3745 have been truncated by another client or perhaps on the server 3746 itself, changing the file size from what the requesting client 3747 believes to be the case. This would reduce the actual amount of data 3748 available to the client. It is possible that the server reduced the 3749 transfer size and so returned a short read result. Server resource 3750 exhaustion may also result in a short read. 3752 If mandatory byte-range locking is in effect for the file, and if the 3753 byte-range corresponding to the data to be read from the file is 3754 WRITE_LT locked by an owner not associated with the stateid, the 3755 server will return the NFS4ERR_LOCKED error. The client should try 3756 to get the appropriate READ_LT via the LOCK operation before re- 3757 attempting the READ_PLUS. When the READ_PLUS completes, the client 3758 should release the byte-range lock via LOCKU. In addition, the 3759 server MUST return an array of rpr_contents with values that are 3760 within the owner's locked byte range. 3762 If another client has an OPEN_DELEGATE_WRITE delegation for the file 3763 being read, the delegation must be recalled, and the operation cannot 3764 proceed until that delegation is returned or revoked. Except where 3765 this happens very quickly, one or more NFS4ERR_DELAY errors will be 3766 returned to requests made while the delegation remains outstanding. 3767 Normally, delegations will not be recalled as a result of a READ_PLUS 3768 operation since the recall will occur as a result of an earlier OPEN. 3769 However, since it is possible for a READ_PLUS to be done with a 3770 special stateid, the server needs to check for this case even though 3771 the client should have done an OPEN previously.
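The NFS4ERR_LOCKED handling described above can be sketched as follows; this is a non-normative illustration, and the session helpers (read_plus, lock, locku) as well as the symbolic status and lock-type strings are hypothetical placeholders for a client's COMPOUND machinery.

   # Non-normative sketch; the session helpers and symbolic status
   # strings are hypothetical placeholders, not protocol definitions.
   def read_plus_with_lock_retry(session, open_stateid, offset, count):
       """On NFS4ERR_LOCKED, take a READ_LT byte-range lock, retry the
       READ_PLUS, and release the lock with LOCKU afterwards."""
       status, res = session.read_plus(open_stateid, offset, count)
       if status != "NFS4ERR_LOCKED":
           return status, res
       # Another owner holds a conflicting WRITE_LT lock; acquire the
       # appropriate READ_LT lock before re-attempting the read.
       lock_stateid = session.lock("READ_LT", open_stateid, offset, count)
       try:
           return session.read_plus(lock_stateid, offset, count)
       finally:
           # Release the byte-range lock once the READ_PLUS completes.
           session.locku(lock_stateid, offset, count)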
3773 12.10.4.1. Additional pNFS Implementation Information 3775 [[Comment.11: We need to go over this section. --TH]] With pNFS, the 3776 semantics of using READ_PLUS remain the same. Any data server MAY 3777 return a READ_HOLE result for a READ_PLUS request that it receives. 3779 When a data server chooses to return a READ_HOLE result, it has the 3780 option of returning hole information for the data stored on that data 3781 server (as defined by the data layout), but it MUST NOT return a 3782 nfs_readplusreshole structure with a byte range that includes data 3783 managed by another data server. 3785 1. Data servers that cannot determine hole information SHOULD return 3786 HOLE_NOINFO. 3788 2. Data servers that can obtain hole information for the parts of 3789 the file stored on that data server SHOULD 3790 return HOLE_INFO and the byte range of the hole stored on that 3791 data server. 3793 A data server should do its best to return as much information about 3794 a hole as is feasible without having to contact the metadata server. 3795 If communication with the metadata server is required, then every 3796 attempt should be made to minimize the number of requests. 3798 If mandatory locking is enforced, then the data server must also 3799 ensure that it returns only information for a hole that is within the 3800 owner's locked byte range. 3802 12.10.5. READ_PLUS with Sparse Files Example 3804 The following table describes a sparse file. For each byte range, 3805 the file contains either non-zero data or a hole. In addition, the 3806 server in this example uses a Hole Threshold of 32K. 3808 +-------------+----------+ 3809 | Byte-Range | Contents | 3810 +-------------+----------+ 3811 | 0-15999 | Hole | 3812 | 16K-31999 | Non-Zero | 3813 | 32K-255999 | Hole | 3814 | 256K-287999 | Non-Zero | 3815 | 288K-353999 | Hole | 3816 | 354K-417999 | Non-Zero | 3817 +-------------+----------+ 3819 Table 3 3821 Under the given circumstances, if a client were to read the file from 3822 beginning to end with a max read size of 64K, the following would be 3823 the result. This assumes the client has already opened the file, 3824 acquired a valid stateid ('s' in the example), and just needs to 3825 issue READ_PLUS requests. [[Comment.12: Change the results to match 3826 array results. --TH]] 3828 1. READ_PLUS(s, 0, 64K) --> NFS_OK, eof = false, data<>[32K]. 3829 Return a short read, as the last half of the request was all 3830 zeroes. Note that the first hole is read back as all zeros as it 3831 is below the Hole Threshold. 3833 2. READ_PLUS(s, 32K, 64K) --> NFS_OK, readplusrestype4 = READ_HOLE, 3834 nfs_readplusreshole(HOLE_INFO)(32K, 224K). The requested range 3835 was all zeros, and the current hole begins at offset 32K and is 3836 224K in length. 3838 3. READ_PLUS(s, 256K, 64K) --> NFS_OK, readplusrestype4 = READ_OK, 3839 eof = false, data<>[32K]. Return a short read, as the last half 3840 of the request was all zeroes. 3842 4. READ_PLUS(s, 288K, 64K) --> NFS_OK, readplusrestype4 = READ_HOLE, 3843 nfs_readplusreshole(HOLE_INFO)(288K, 66K). 3845 5. READ_PLUS(s, 354K, 64K) --> NFS_OK, readplusrestype4 = READ_OK, 3846 eof = true, data<>[64K]. 3848 12.11. Operation 66: SEEK 3850 SEEK is an operation that allows a client to determine the location 3851 of the next data_content4 in a file.
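As a non-normative illustration, and anticipating the argument and result definitions in the subsections that follow, a client might enumerate the data regions of a sparse file by alternating SEEK requests for data and for the hole that follows it. The session.seek() helper and the attribute names on its decoded result are hypothetical stand-ins for a client's RPC and XDR machinery.

   # Non-normative sketch; session.seek() and the result attributes
   # are hypothetical.
   def data_extents(session, stateid, file_size):
       """Yield (offset, length) pairs for the data regions of a
       sparse file by alternately seeking for data and for the hole
       that follows it."""
       offset = 0
       while offset < file_size:
           eof, found = session.seek(stateid, offset, "NFS4_CONTENT_DATA")
           if eof:
               break                      # no further data in the file
           start = found.di_offset        # next data region begins here
           eof, hole = session.seek(stateid, start, "NFS4_CONTENT_HOLE")
           end = file_size if eof else hole.di_offset
           yield (start, end - start)
           offset = end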
3853 12.11.1. ARGUMENT 3855 struct SEEK4args { 3856 /* CURRENT_FH: file */ 3857 stateid4 sa_stateid; 3858 offset4 sa_offset; 3859 data_content4 sa_what; 3860 }; 3862 12.11.2. RESULT 3864 union seek_content switch (data_content4 content) { 3865 case NFS4_CONTENT_DATA: 3866 data_info4 sc_data; 3867 case NFS4_CONTENT_APP_BLOCK: 3868 app_data_block4 sc_block; 3869 case NFS4_CONTENT_HOLE: 3870 data_info4 sc_hole; 3871 default: 3872 void; 3873 }; 3875 struct seek_res4 { 3876 bool sr_eof; 3877 seek_content sr_contents; 3878 }; 3880 union SEEK4res switch (nfsstat4 status) { 3881 case NFS4_OK: 3882 seek_res4 resok4; 3883 default: 3884 void; 3885 }; 3887 12.11.3. DESCRIPTION 3889 From the given sa_offset, find the next data_content4 of type sa_what 3890 in the file. For either a hole or an ADB, this must return the 3891 data_content4 in its entirety. For data, it must not return the 3892 actual data. 3894 SEEK must follow the same rules for stateids as READ_PLUS 3895 (Section 12.10.3). 3897 If the server could not find a corresponding sa_what, then the status 3898 would still be NFS4_OK, but sr_eof would be TRUE. The sr_contents 3899 would contain a zeroed-out content of the appropriate type. 3901 13. NFSv4.2 Callback Operations 3903 13.1. Procedure 16: CB_ATTR_CHANGED - Notify Client that the File's 3904 Attributes Changed 3906 13.1.1. ARGUMENTS 3908 struct CB_ATTR_CHANGED4args { 3909 nfs_fh4 acca_fh; 3910 bitmap4 acca_critical; 3911 bitmap4 acca_info; 3912 }; 3914 13.1.2. RESULTS 3916 struct CB_ATTR_CHANGED4res { 3917 nfsstat4 accr_status; 3918 }; 3920 13.1.3. DESCRIPTION 3922 The CB_ATTR_CHANGED callback operation is used by the server to 3923 indicate to the client that the file's attributes have been modified 3924 on the server. The server does not convey how the attributes have 3925 changed, just that they have been modified. The server can inform 3926 the client about both critical and informational attribute changes in 3927 the bitmask arguments. The client SHOULD query the server about all 3928 attributes set in acca_critical. For all changes reflected in 3929 acca_info, the client can decide whether or not it wants to poll the 3930 server. 3932 The CB_ATTR_CHANGED callback operation with FATTR4_SEC_LABEL set 3933 in acca_critical is the method used by the server to indicate that 3934 the MAC label for the file referenced by acca_fh has changed. In 3935 many ways, the server does not care about the result returned by the 3936 client. 3938 13.2. Operation 15: CB_COPY - Report results of a server-side copy 3939 13.2.1. ARGUMENT 3941 union copy_info4 switch (nfsstat4 cca_status) { 3942 case NFS4_OK: 3943 void; 3944 default: 3945 length4 cca_bytes_copied; 3946 }; 3948 struct CB_COPY4args { 3949 nfs_fh4 cca_fh; 3950 stateid4 cca_stateid; 3951 copy_info4 cca_copy_info; 3952 }; 3954 13.2.2. RESULT 3956 struct CB_COPY4res { 3957 nfsstat4 ccr_status; 3958 }; 3960 13.2.3. DESCRIPTION 3962 CB_COPY is used for both intra- and inter-server asynchronous copies. 3963 The CB_COPY callback informs the client of the result of an 3964 asynchronous server-side copy. This operation is sent by the 3965 destination server to the client in a CB_COMPOUND request. The copy 3966 is identified by the filehandle and stateid arguments. The result is 3967 indicated by the status field. If the copy failed, cca_bytes_copied 3968 contains the number of bytes copied before the failure occurred. The 3969 cca_bytes_copied value indicates the number of bytes copied but not 3970 which specific bytes have been copied.
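A non-normative sketch of a client-side CB_COPY handler is shown below; the pending_copies table and the methods on the tracked copy object are hypothetical client bookkeeping, and the sketch only illustrates the use of cca_stateid, cca_status, and cca_bytes_copied described above.

   # Non-normative sketch; pending_copies and the methods on the
   # tracked copy object are hypothetical client-side bookkeeping.
   pending_copies = {}   # copy stateid -> outstanding asynchronous COPY

   def handle_cb_copy(args):
       """Process a CB_COPY callback from the destination server."""
       copy = pending_copies.pop(args.cca_stateid, None)
       if copy is None:
           # Unknown or already-reaped copy; error handling for this
           # case is outside the scope of the sketch.
           return
       if args.cca_copy_info.cca_status == "NFS4_OK":
           copy.complete()
       else:
           # cca_bytes_copied reports how many bytes were copied
           # before the failure, not which ones, so a safe fallback
           # is to redo the copy with ordinary READ/WRITE.
           copy.fail(bytes_copied=args.cca_copy_info.cca_bytes_copied)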
3972 In the absence of an established backchannel, the server cannot 3973 signal the completion of the COPY via a CB_COPY callback. The loss 3974 of a callback channel would be indicated by the server setting the 3975 SEQ4_STATUS_CB_PATH_DOWN flag in the sr_status_flags field of the 3976 SEQUENCE operation. The client must re-establish the callback 3977 channel to receive the status of the COPY operation. Prolonged loss 3978 of the callback channel could result in the server dropping the COPY 3979 operation state and invalidating the copy stateid. 3981 If the client supports the COPY operation, the client is REQUIRED to 3982 support the CB_COPY operation. 3984 The CB_COPY operation may fail for the following reasons (this is a 3985 partial list): 3987 NFS4ERR_NOTSUPP: The copy offload operation is not supported by the 3988 NFS client receiving this request. 3990 14. IANA Considerations 3992 This section uses terms that are defined in [27]. 3994 15. References 3996 15.1. Normative References 3998 [1] Bradner, S., "Key words for use in RFCs to Indicate Requirement 3999 Levels", March 1997. 4001 [2] Shepler, S., Eisler, M., and D. Noveck, "Network File System 4002 (NFS) Version 4 Minor Version 1 Protocol", RFC 5661, 4003 January 2010. 4005 [3] Haynes, T., "Network File System (NFS) Version 4 Minor Version 4006 2 External Data Representation Standard (XDR) Description", 4007 March 2011. 4009 [4] Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform 4010 Resource Identifier (URI): Generic Syntax", STD 66, RFC 3986, 4011 January 2005. 4013 [5] Haynes, T. and N. Williams, "Remote Procedure Call (RPC) 4014 Security Version 3", draft-williams-rpcsecgssv3 (work in 4015 progress), 2011. 4017 [6] The Open Group, "Section 'posix_fadvise()' of System Interfaces 4018 of The Open Group Base Specifications Issue 6, IEEE Std 1003.1, 4019 2004 Edition", 2004. 4021 [7] Eisler, M., Chiu, A., and L. Ling, "RPCSEC_GSS Protocol 4022 Specification", RFC 2203, September 1997. 4024 [8] Halevy, B., Welch, B., and J. Zelenka, "Object-Based Parallel 4025 NFS (pNFS) Operations", RFC 5664, January 2010. 4027 [9] Shepler, S., Eisler, M., and D. Noveck, "Network File System 4028 (NFS) Version 4 Minor Version 1 External Data Representation 4029 Standard (XDR) Description", RFC 5662, January 2010. 4031 [10] Black, D., Glasgow, J., and S. Fridella, "Parallel NFS (pNFS) 4032 Block/Volume Layout", RFC 5663, January 2010. 4034 15.2. Informative References 4036 [11] Haynes, T. and D. Noveck, "Network File System (NFS) version 4 4037 Protocol", draft-ietf-nfsv4-rfc3530bis-09 (Work In Progress), 4038 March 2011. 4040 [12] Lentini, J., Everhart, C., Ellard, D., Tewari, R., and M. Naik, 4041 "NSDB Protocol for Federated Filesystems", 4042 draft-ietf-nfsv4-federated-fs-protocol (Work In Progress), 4043 2010. 4045 [13] Lentini, J., Everhart, C., Ellard, D., Tewari, R., and M. Naik, 4046 "Administration Protocol for Federated Filesystems", 4047 draft-ietf-nfsv4-federated-fs-admin (Work In Progress), 2010. 4049 [14] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., 4050 Leach, P., and T. Berners-Lee, "Hypertext Transfer Protocol -- 4051 HTTP/1.1", RFC 2616, June 1999. 4053 [15] Postel, J. and J. Reynolds, "File Transfer Protocol", STD 9, 4054 RFC 959, October 1985. 4056 [16] Simpson, W., "PPP Challenge Handshake Authentication Protocol 4057 (CHAP)", RFC 1994, August 1996. 4059 [17] VanDeBogart, S., Frost, C., and E. 
Kohler, "Reducing Seek 4060 Overhead with Application-Directed Prefetching", Proceedings of 4061 USENIX Annual Technical Conference , June 2009. 4063 [18] Strohm, R., "Chapter 2, Data Blocks, Extents, and Segments, of 4064 Oracle Database Concepts 11g Release 1 (11.1)", January 2011. 4066 [19] Ashdown, L., "Chapter 15, Validating Database Files and 4067 Backups, of Oracle Database Backup and Recovery User's Guide 4068 11g Release 1 (11.1)", August 2008. 4070 [20] McDougall, R. and J. Mauro, "Section 11.4.3, Detecting Memory 4071 Corruption of Solaris Internals", 2007. 4073 [21] Bairavasundaram, L., Goodson, G., Schroeder, B., Arpaci- 4074 Dusseau, A., and R. Arpaci-Dusseau, "An Analysis of Data 4075 Corruption in the Storage Stack", Proceedings of the 6th USENIX 4076 Symposium on File and Storage Technologies (FAST '08) , 2008. 4078 [22] "Section 46.6. Multi-Level Security (MLS) of Deployment Guide: 4079 Deployment, configuration and administration of Red Hat 4080 Enterprise Linux 5, Edition 6", 2011. 4082 [23] Quigley, D. and J. Lu, "Registry Specification for MAC Security 4083 Label Formats", draft-quigley-label-format-registry (work in 4084 progress), 2011. 4086 [24] Eisler, M., "XDR: External Data Representation Standard", 4087 RFC 4506, May 2006. 4089 [25] Wong, T. and J. Wilkes, "My cache or yours? Making storage more 4090 exclusive", Proceedings of the USENIX Annual Technical 4091 Conference , 2002. 4093 [26] Muntz, D. and P. Honeyman, "Multi-level Caching in Distributed 4094 File Systems", Proceedings of USENIX Annual Technical 4095 Conference , 1992. 4097 [27] Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA 4098 Considerations Section in RFCs", BCP 26, RFC 5226, May 2008. 4100 [28] Nowicki, B., "NFS: Network File System Protocol specification", 4101 RFC 1094, March 1989. 4103 [29] Callaghan, B., Pawlowski, B., and P. Staubach, "NFS Version 3 4104 Protocol Specification", RFC 1813, June 1995. 4106 [30] Srinivasan, R., "Binding Protocols for ONC RPC Version 2", 4107 RFC 1833, August 1995. 4109 [31] Eisler, M., "NFS Version 2 and Version 3 Security Issues and 4110 the NFS Protocol's Use of RPCSEC_GSS and Kerberos V5", 4111 RFC 2623, June 1999. 4113 [32] Callaghan, B., "NFS URL Scheme", RFC 2224, October 1997. 4115 [33] Shepler, S., "NFS Version 4 Design Considerations", RFC 2624, 4116 June 1999. 4118 [34] Reynolds, J., "Assigned Numbers: RFC 1700 is Replaced by an On- 4119 line Database", RFC 3232, January 2002. 4121 [35] Linn, J., "The Kerberos Version 5 GSS-API Mechanism", RFC 1964, 4122 June 1996. 4124 [36] Shepler, S., Callaghan, B., Robinson, D., Thurlow, R., Beame, 4125 C., Eisler, M., and D. Noveck, "Network File System (NFS) 4126 version 4 Protocol", RFC 3530, April 2003. 4128 Appendix A. Acknowledgments 4130 For the pNFS Access Permissions Check, the original draft was by 4131 Sorin Faibish, David Black, Mike Eisler, and Jason Glasgow. The work 4132 was influenced by discussions with Benny Halevy and Bruce Fields. A 4133 review was done by Tom Haynes. 4135 For the Sharing change attribute implementation details with NFSv4 4136 clients, the original draft was by Trond Myklebust. 4138 For the NFS Server-side Copy, the original draft was by James 4139 Lentini, Mike Eisler, Deepak Kenchammana, Anshul Madan, and Rahul 4140 Iyer. Tom Talpey co-authored an unpublished version of that 4141 document. 
It was also reviewed by a number of individuals: 4142 Pranoop Erasani, Tom Haynes, Arthur Lent, Trond Myklebust, Dave 4143 Noveck, Theresa Lingutla-Raj, Manjunath Shankararao, Satyam Vaghani, 4144 and Nico Williams. 4146 For the NFS space reservation operations, the original draft was by 4147 Mike Eisler, James Lentini, Manjunath Shankararao, and Rahul Iyer. 4149 For the sparse file support, the original draft was by Dean 4150 Hildebrand and Marc Eshel. Valuable input and advice were received 4151 from Sorin Faibish, Bruce Fields, Benny Halevy, Trond Myklebust, and 4152 Richard Scheffenegger. 4154 For the Application IO Hints, the original draft was by Dean 4155 Hildebrand, Mike Eisler, Trond Myklebust, and Sam Falkner. Some 4156 early reviewers included Benny Halevy and Pranoop Erasani. 4158 For Labeled NFS, the original draft was by David Quigley, James 4159 Morris, Jarret Lu, and Tom Haynes. Peter Staubach, Trond Myklebust, 4160 Sorin Faibish, Nico Williams, and David Black also contributed in 4161 the final push to get this accepted. 4163 Appendix B. RFC Editor Notes 4165 [RFC Editor: please remove this section prior to publishing this 4166 document as an RFC] 4168 [RFC Editor: prior to publishing this document as an RFC, please 4169 replace all occurrences of RFCTBD10 with RFCxxxx where xxxx is the 4170 RFC number of this document] 4172 Author's Address 4174 Thomas Haynes 4175 NetApp 4176 9110 E 66th St 4177 Tulsa, OK 74133 4178 USA 4180 Phone: +1 918 307 1415 4181 Email: thomas@netapp.com 4182 URI: http://www.tulsalabs.com