NFSv4 Working Group                                       David L. Black
Internet Draft                                          Stephen Fridella
Expires: June 2006                                       EMC Corporation
                                                       December 30, 2005

                        pNFS Block/Volume Layout
                   draft-ietf-nfsv4-pnfs-block-00.txt

Status of this Memo

By submitting this Internet-Draft, each author represents that any
applicable patent or other IPR claims of which he or she is aware have
been or will be disclosed, and any of which he or she becomes aware
will be disclosed, in accordance with Section 6 of BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups.
Note that other groups may also distribute working documents as
Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html

This Internet-Draft will expire in June 2006.

Abstract

Parallel NFS (pNFS) extends NFSv4 to allow clients to directly access
file data on the storage used by the NFSv4 server.  This ability to
bypass the server for data access can increase both performance and
parallelism, but requires additional client functionality for data
access, some of which is dependent on the class of storage used.  The
main pNFS operations draft specifies storage-class-independent
extensions to NFS; this draft specifies the additional extensions
(primarily data structures) for use of pNFS with block- and
volume-based storage.

Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC-2119 [RFC2119].

Table of Contents

   1. Introduction
   2. Background and Architecture
      2.1. Data Structures: Extents, Extent Lists, and Volumes
           2.1.1. Layout Requests and Extent Lists
           2.1.2. Layout Commits
           2.1.3. Layout Returns
           2.1.4. Client Copy-on-Write Processing
           2.1.5. Extents are Permissions
           2.1.6. End-of-file Processing
      2.2. Volume Identification
   3. New Operations Issues
      3.1. Server Controlling Client Access to Block Devices
           3.1.1. Guarantees Provided by Layouts
           3.1.2. I/O In-flight Issues
           3.1.3. Crash Recovery Issues
      3.2. End-of-File (EOF) Handling Issues
           3.2.1. Truncation
           3.2.2. Extension
           3.2.3. Extension vs. Readable Layouts
   4. Security Considerations
   5. Conclusions
   6. IANA Considerations
   7. Revision History
   8. Acknowledgments
   9. References
      9.1. Normative References
      9.2. Informative References
   Author's Addresses
   Intellectual Property Statement
   Disclaimer of Validity
   Copyright Statement
   Acknowledgment

1. Introduction

Figure 1 shows the overall architecture of a pNFS system:

   +-----------+
   |+-----------+                               +-----------+
   ||+-----------+                              |           |
   |||           |        NFSv4 + pNFS          |           |
   +||  Clients  |<---------------------------->|  Server   |
    +|           |                              |           |
     +-----------+                              |           |
          |||                                   +-----------+
          |||                                        |
          |||                                        |
          |||          +-----------+                 |
          |||          |+-----------+                |
          ||+----------||+-----------+               |
          |+-----------|||           |               |
          +------------+||  Storage  |---------------+
                        +|  Systems  |
                         +-----------+

                      Figure 1 pNFS Architecture

The overall approach is that pNFS-enhanced clients obtain sufficient
information from the server to enable them to access the underlying
storage (on the Storage Systems) directly.  See the pNFS portion of
[NFSV4.1] for more details.  This draft is concerned with access from
pNFS clients to Storage Systems over storage protocols based on
blocks and volumes, such as the SCSI protocol family (e.g., parallel
SCSI, FCP for Fibre Channel, iSCSI, SAS).  This class of storage is
referred to as block/volume storage.  While the Server-to-Storage
System protocol is not of concern for interoperability here, it will
typically also be a block/volume protocol when clients use
block/volume protocols.

2. Background and Architecture

The fundamental storage abstraction supported by block/volume storage
is a storage volume consisting of a sequential series of fixed-size
blocks.  This can be thought of as a logical disk; it may be realized
by the Storage System as a physical disk, a portion of a physical
disk, or something more complex (e.g., concatenation, striping, RAID,
and combinations thereof) involving multiple physical disks or
portions thereof.

A pNFS layout for this block/volume class of storage is responsible
for mapping from an NFS file (or portion of a file) to the blocks of
storage volumes that contain the file.  The blocks are expressed as
extents with 64-bit offsets and lengths using the existing NFSv4
offset4 and length4 types.  Clients must be able to perform I/O to
the block extents without affecting additional areas of storage
(especially important for writes); therefore extents MUST be aligned
to 512-byte boundaries, and SHOULD be aligned to the block size used
by the NFSv4 server in managing the actual filesystem (4 kilobytes
and 8 kilobytes are common block sizes).  This block size is
available as an NFSv4 attribute - see Section 11.4 of [NFSV4.1].

The pNFS operation for requesting a layout (LAYOUTGET) includes the
"pnfs_layoutiomode4 iomode" argument, which indicates whether the
requested layout is for read-only use or read-write use.  A read-only
layout may contain holes that are read as zero, whereas a read-write
layout will contain allocated, but uninitialized, storage in those
holes (read as zero, can be written by the client).  This draft also
supports client participation in copy-on-write by providing both
read-only and uninitialized storage for the same range in a layout.
Reads are initially performed on the read-only storage, with writes
going to the uninitialized storage.  After the first write that
initializes the uninitialized storage, all reads are performed to
that now-initialized writeable storage, and the corresponding
read-only storage is no longer used.
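To make the alignment rules above concrete, the following C sketch
shows the checks a client might apply to an extent before using it
for direct I/O.  This is a minimal illustration, not protocol text;
the pnfs_extent structure and helper names are assumptions local to
this sketch (the on-the-wire XDR types appear in Section 2.1).

   #include <stdbool.h>
   #include <stdint.h>

   /* Illustrative extent; see Section 2.1 for the XDR definitions. */
   struct pnfs_extent {
       uint64_t offset;          /* starting offset in the file */
       uint64_t length;          /* size of the extent in bytes */
       uint64_t storage_offset;  /* starting offset in the volume */
   };

   #define SECTOR_SIZE 512u

   /* MUST: extents are aligned to 512-byte boundaries. */
   static bool extent_sector_aligned(const struct pnfs_extent *e)
   {
       return e->offset % SECTOR_SIZE == 0 &&
              e->length % SECTOR_SIZE == 0 &&
              e->storage_offset % SECTOR_SIZE == 0;
   }

   /* SHOULD: extents also align to the server's filesystem block
      size (e.g., 4096 or 8192), obtained via the NFSv4 attribute
      described in Section 11.4 of [NFSV4.1]. */
   static bool extent_block_aligned(const struct pnfs_extent *e,
                                    uint64_t block_size)
   {
       return e->offset % block_size == 0 &&
              e->storage_offset % block_size == 0;
   }

A client would reject extents failing the first check and fall back
to partial-block handling (Section 2.1.4) for extents failing the
second.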
2.1. Data Structures: Extents, Extent Lists, and Volumes

A pNFS block layout is a list of extents within a flat array of
512-byte data blocks known as a volume.  A volume may correspond to a
single logical unit in a SAN, or a more complex aggregation of
multiple logical units.  The block layout describes both the topology
of the volume as well as the individual block extents on the volume
that make up the file.  Each individual extent MUST be at least
512-byte aligned.

   enum extentState4 {
       READ_WRITE_DATA = 0,  /* the data located by this extent is
                                valid for reading and writing. */
       READ_DATA       = 1,  /* the data located by this extent is
                                valid for reading only; it may not be
                                written. */
       INVALID_DATA    = 2,  /* the location is valid; the data is
                                invalid.  It is a newly (pre-)
                                allocated extent.  There is physical
                                space on the volume. */
       NONE_DATA       = 3   /* the location is invalid.  It is a
                                hole in the file.  There is no
                                physical space on the volume. */
   };

   struct pnfs_block_extent {
       offset4      offset;          /* the starting offset in the
                                        file */
       length4      length;          /* the size of the extent */
       offset4      storage_offset;  /* the starting offset in the
                                        volume */
       extentState4 es;              /* the state of this extent */
   };

   enum pnfs_block_volume_type {
       VOLUME_SIMPLE = 0,  /* volume maps to a single LU */
       VOLUME_SLICE  = 1,  /* volume is a slice of another volume */
       VOLUME_CONCAT = 2,  /* volume is a concatenation of multiple
                              volumes */
       VOLUME_STRIPE = 3   /* volume is striped across multiple
                              volumes */
   };

   struct pnfs_block_slice_volume_info {
       offset4           start;   /* block-offset of the start of the
                                     slice */
       length4           length;  /* length of slice in blocks */
       pnfs_block_volume volume;  /* volume which is sliced */
   };

   struct pnfs_block_concat_volume_info {
       pnfs_block_volume volumes<>;  /* volumes which are
                                        concatenated */
   };

   struct pnfs_block_stripe_volume_info {
       length4           stripe_unit;  /* size of stripe */
       pnfs_block_volume volumes<>;    /* volumes which are striped
                                          across */
   };

   union pnfs_block_volume switch (pnfs_block_volume_type type) {
       case VOLUME_SIMPLE:
           pnfs_deviceid4 volume_ID;
       case VOLUME_SLICE:
           pnfs_block_slice_volume_info slice_info;
       case VOLUME_CONCAT:
           pnfs_block_concat_volume_info concat_info;
       case VOLUME_STRIPE:
           pnfs_block_stripe_volume_info stripe_info;
   };

   struct pnfs_block_layout {
       pnfs_block_volume volume;     /* topology of the volume on
                                        which the file is stored. */
       pnfs_block_extent extents<>;  /* extents which make up this
                                        layout. */
   };

The block layout consists of information describing the topology of
the logical volume on which the file is stored, followed by a list of
extents which map the logical regions of the file to physical
locations on the volume.  The "pnfs_block_volume" union is a
recursive structure that allows arbitrarily complex nested volume
structures to be encoded.  The types of aggregations that are allowed
are stripes, concatenations, and slices.  The base case is a volume
which maps simply to one logical unit in the SAN, identified by the
"pnfs_deviceid4" (see the discussion of volume identification in
Section 2.2 below).  The "storage_offset" field within each extent
identifies a location on the logical volume described by the "volume"
field in the layout.  The client is responsible for translating this
logical offset into an offset on the appropriate underlying SAN
logical unit.
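That translation can be written as a straightforward recursion over
the volume union.  The C sketch below is illustrative only: it
mirrors the XDR above in simplified form, the size fields are assumed
to be known to the client (e.g., from device discovery), and the
round-robin stripe mapping is one plausible interpretation of
stripe_unit, which this draft does not spell out.

   #include <stdint.h>

   enum vol_type { VOL_SIMPLE, VOL_SLICE, VOL_CONCAT, VOL_STRIPE };

   /* Simplified in-memory mirror of pnfs_block_volume. */
   struct volume {
       enum vol_type   type;
       uint64_t        size;         /* total size in 512-byte blocks */
       uint32_t        device_id;    /* VOL_SIMPLE: the LU */
       uint64_t        start;        /* VOL_SLICE: offset into child */
       uint64_t        stripe_unit;  /* VOL_STRIPE: unit in blocks */
       struct volume **vols;         /* children (one for VOL_SLICE) */
       unsigned        nvols;
   };

   /* Translate block offset 'off' on logical volume 'v' into a
      (device, block offset) pair on one underlying LU.  Returns 0 on
      success, -1 if 'off' is out of range. */
   static int vol_resolve(const struct volume *v, uint64_t off,
                          uint32_t *dev, uint64_t *lu_off)
   {
       if (off >= v->size)
           return -1;
       switch (v->type) {
       case VOL_SIMPLE:                      /* base case: one LU */
           *dev = v->device_id;
           *lu_off = off;
           return 0;
       case VOL_SLICE:                       /* shift into the child */
           return vol_resolve(v->vols[0], v->start + off, dev, lu_off);
       case VOL_CONCAT:                      /* walk children in order */
           for (unsigned i = 0; i < v->nvols; i++) {
               if (off < v->vols[i]->size)
                   return vol_resolve(v->vols[i], off, dev, lu_off);
               off -= v->vols[i]->size;
           }
           return -1;
       case VOL_STRIPE: {                    /* round-robin by unit */
           uint64_t unit = off / v->stripe_unit;
           unsigned idx  = (unsigned)(unit % v->nvols);
           uint64_t row  = unit / v->nvols;
           return vol_resolve(v->vols[idx],
                              row * v->stripe_unit +
                              off % v->stripe_unit,
                              dev, lu_off);
       }
       }
       return -1;
   }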
Each extent maps a logical region of the file onto a portion of the
specified logical volume.  The offset, length, and es fields for an
extent returned from the server are always valid.  The interpretation
of the storage_offset field depends on the value of es as follows:

o  READ_WRITE_DATA means that storage_offset is valid, and points to
   valid/initialized data that can be read and written.

o  READ_DATA means that storage_offset is valid and points to
   valid/initialized data which can only be read.  Write operations
   are prohibited; the client may need to request a read-write
   layout.

o  INVALID_DATA means that storage_offset is valid, but points to
   invalid, uninitialized data.  This data must not be physically
   read from the disk until it has been initialized.  A read request
   for an INVALID_DATA extent must fill the user buffer with zeros.
   Write requests must write whole server-sized blocks to the disk;
   bytes not initialized by the user must be set to zero.  Any write
   to storage in an INVALID_DATA extent changes the written portion
   of the extent to READ_WRITE_DATA; the pNFS client is responsible
   for reporting this change via LAYOUTCOMMIT.

o  NONE_DATA means that storage_offset is not valid, and this extent
   may not be used to satisfy write requests.  Read requests may be
   satisfied by zero-filling as for INVALID_DATA.  NONE_DATA extents
   are returned by requests for readable extents; they are never
   returned if the request was for a writeable extent.

The extent list lists all relevant extents in increasing order of the
file offset of each extent; any ties are broken by increasing order
of the extent state (es).

2.1.1. Layout Requests and Extent Lists

Each request for a layout specifies at least three parameters:
offset, desired size, and minimum size (the desired size is missing
from the operations draft - see Section 3).  If the status of a
request indicates success, the extent list returned must meet the
following criteria (a validation sketch follows this list):

o  A request for a readable (but not writeable) layout returns only
   READ_DATA or NONE_DATA extents (but not INVALID_DATA or
   READ_WRITE_DATA extents).

o  A request for a writeable layout returns READ_WRITE_DATA or
   INVALID_DATA extents (but not NONE_DATA extents).  It may also
   return READ_DATA extents, but only when the offset ranges in those
   extents are also covered by INVALID_DATA extents that permit
   writes.

o  The first extent in the list MUST contain the starting offset.

o  The total size of extents in the extent list MUST cover at least
   the minimum size and no more than the desired size.  One exception
   is allowed: the total size MAY be smaller if only readable extents
   were requested and EOF is encountered.

o  Extents in the extent list MUST be logically contiguous for a
   read-only layout.  For a read-write layout, the set of writable
   extents (i.e., excluding READ_DATA extents) MUST be logically
   contiguous.  Every READ_DATA extent in a read-write layout MUST be
   covered by an INVALID_DATA extent.  This overlap of READ_DATA and
   INVALID_DATA extents is the only permitted extent overlap.

o  Extents MUST be ordered in the list by starting offset, with
   READ_DATA extents preceding INVALID_DATA extents in the case of
   equal file offsets.
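As an illustration of these rules, here is a hedged C sketch of how a
client might sanity-check the extent list returned for a writable
layout.  It covers ordering, the starting-offset requirement, and
contiguity of the writable extents; the check that READ_DATA ranges
are covered by INVALID_DATA extents is omitted for brevity, and all
names are local to this sketch.

   #include <stdbool.h>
   #include <stdint.h>

   enum es { READ_WRITE_DATA = 0, READ_DATA = 1,
             INVALID_DATA = 2, NONE_DATA = 3 };

   struct extent { uint64_t offset, length; enum es state; };

   static bool rw_extent_list_ok(const struct extent *ex, unsigned n,
                                 uint64_t req_offset)
   {
       if (n == 0 ||
           req_offset < ex[0].offset ||
           req_offset >= ex[0].offset + ex[0].length)
           return false;    /* first extent MUST contain the start */

       bool     have_writable = false;
       uint64_t next_writable = 0;
       for (unsigned i = 0; i < n; i++) {
           if (ex[i].state == NONE_DATA)
               return false;  /* never returned in writable layouts */
           if (i > 0) {       /* ordered by offset; READ_DATA (lower
                                 es) precedes on equal offsets */
               if (ex[i].offset < ex[i-1].offset)
                   return false;
               if (ex[i].offset == ex[i-1].offset &&
                   ex[i].state < ex[i-1].state)
                   return false;
           }
           if (ex[i].state == READ_DATA)
               continue;      /* overlaps an INVALID_DATA extent */
           if (have_writable && ex[i].offset != next_writable)
               return false;  /* writable extents MUST be contiguous */
           have_writable = true;
           next_writable = ex[i].offset + ex[i].length;
       }
       return have_writable;
   }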
2.1.2. Layout Commits

   struct pnfs_block_layoutupdate {
       pnfs_block_extent commit_list<>;  /* list of extents which now
                                            contain valid data. */
       bool              make_version;   /* client requests server to
                                            create a copy-on-write
                                            image of this file. */
   };

The "pnfs_block_layoutupdate" structure is used by the client as the
block-protocol-specific argument in a LAYOUTCOMMIT operation.  The
"commit_list" field is an extent list covering regions of the file
layout that were previously in the INVALID_DATA state, but have been
written by the client and should now be considered in the
READ_WRITE_DATA state.  It should be noted that the server may be
unable to commit regions at a granularity smaller than a file-system
block (typically 4 KB or 8 KB).  As noted above, the block size that
the server uses is available as an NFSv4 attribute, and any extents
included in the "commit_list" must be aligned to this granularity.
If the client believes that its actions have moved the end-of-file
into the middle of a block being committed, the client MUST write
zeroes from the end-of-file to the end of that block before
committing the block.  Failure to do so may result in junk
(uninitialized data) appearing in that area if the file is
subsequently extended by moving the end-of-file.

The "make_version" field of the structure is a flag that the client
may set to request that the server create a copy-on-write image of
the file (see Section 2.1.4 below).  In anticipation of this
operation, a client which sets the "make_version" flag in the
LAYOUTCOMMIT operation should immediately mark all extents in the
layout that it possesses as state READ_DATA.  Future writes to the
file require a new LAYOUTGET operation to the server with an "iomode"
set to LAYOUTIOMODE_RW.
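The block-granularity and end-of-file zeroing rules above can be
illustrated with a short C sketch.  This is an assumed client-side
helper, not protocol text: it rounds one written byte range out to
server-block boundaries for inclusion in the commit_list, and zeroes
from EOF to the end of the block containing it when EOF falls inside
the committed range.  zero_range() stands in for a client write
through the layout and is an assumption of this sketch.

   #include <stdint.h>

   /* Assumed helper: write zeroes to [file_off, file_off + len). */
   extern void zero_range(uint64_t file_off, uint64_t len);

   struct commit_extent { uint64_t offset, length; };

   static struct commit_extent
   make_commit_extent(uint64_t written_off, uint64_t written_len,
                      uint64_t eof, uint64_t blksz)
   {
       struct commit_extent ce;
       uint64_t end         = written_off + written_len;
       uint64_t aligned_end = (end + blksz - 1) / blksz * blksz;

       ce.offset = written_off - written_off % blksz;  /* round down */
       ce.length = aligned_end - ce.offset;            /* round up */

       /* If EOF now sits in the middle of a block being committed,
          the client MUST zero from EOF to the end of that block
          before committing it. */
       if (eof % blksz != 0 && eof >= ce.offset && eof < aligned_end) {
           uint64_t block_end = (eof / blksz + 1) * blksz;
           zero_range(eof, block_end - eof);
       }
       return ce;
   }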
2.1.3. Layout Returns

   struct pnfs_block_layoutreturn {
       pnfs_block_extent rel_list<>;  /* list of extents the client
                                         will no longer use. */
   };

The "rel_list" field is an extent list covering regions of the file
layout that are no longer needed by the client.  Including extents in
the "rel_list" for a LAYOUTRETURN operation represents an explicit
release of resources by the client, usually done for the purpose of
avoiding unnecessary CB_LAYOUTRECALL operations in the future.

2.1.4. Client Copy-on-Write Processing

Distinguishing the READ_WRITE_DATA and READ_DATA extent types,
combined with the allowed overlap of READ_DATA extents with
INVALID_DATA extents, allows copy-on-write processing to be done by
pNFS clients.  In classic NFS, this operation would be done by the
server.  Since pNFS enables clients to do direct block access, it is
useful for clients to participate in copy-on-write operations.  All
block/volume pNFS clients MUST support this copy-on-write processing.

When a client wishes to write data covered by a READ_DATA extent, it
MUST have requested a writable layout from the server; that layout
will contain INVALID_DATA extents to cover all the data ranges of
that layout's READ_DATA extents.  More precisely, for any file offset
range covered by one or more READ_DATA extents in a writable layout,
the server MUST include one or more INVALID_DATA extents in the
layout that cover the same file offset range.  The client MUST
logically copy the data from the READ_DATA extent for any partial
blocks of the written offset and range, merge in the changes to be
written, and write the result to the INVALID_DATA extent for the
blocks for that offset and range.  That is, if entire blocks of data
are to be overwritten by an operation, the corresponding READ_DATA
blocks need not be fetched, but any partial-block writes must be
merged with data fetched via READ_DATA extents before storing the
result via INVALID_DATA extents.  For the purposes of this
discussion, "entire blocks" and "partial blocks" refer to the
server's file-system block size.  Storing of data in an INVALID_DATA
extent converts the written portion of the INVALID_DATA extent to a
READ_WRITE_DATA extent; all subsequent reads MUST be performed from
this extent, and the corresponding portion of the READ_DATA extent
MUST NOT be used after storing data in an INVALID_DATA extent.

In the LAYOUTCOMMIT operation that normally sends updated layout
information back to the server, for writable data some INVALID_DATA
extents may be committed as READ_WRITE_DATA extents, signifying that
the storage at the corresponding storage_offset values has been
stored into and is now to be considered as valid data to be read.
READ_DATA extents need not be sent to the server.  For extents that
the client receives via LAYOUTGET as INVALID_DATA and returns via
LAYOUTCOMMIT as READ_WRITE_DATA, the server will understand that the
READ_DATA mapping for that extent is no longer valid or necessary for
that file.
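The read-merge-write sequence above reduces to a small amount of
client code per server-sized block.  The following hedged C sketch
assumes the client has already located the matching READ_DATA and
INVALID_DATA extents and translated their storage offsets (Section
2.1); storage_read()/storage_write() stand in for direct block I/O
and are assumptions of this sketch.

   #include <stdint.h>
   #include <string.h>

   /* Assumed direct-I/O helpers operating on volume offsets. */
   extern void storage_read (uint64_t vol_off, void *buf,
                             uint64_t len);
   extern void storage_write(uint64_t vol_off, const void *buf,
                             uint64_t len);

   /* Write 'data_len' bytes at byte offset 'data_off' within one
      server-sized block.  rd_off/inv_off are the volume offsets of
      that block within the READ_DATA and INVALID_DATA extents. */
   static void cow_write_block(uint64_t blksz,
                               uint64_t rd_off, uint64_t inv_off,
                               const void *data, uint64_t data_off,
                               uint64_t data_len)
   {
       uint8_t block[8192];            /* assumes blksz <= 8 KB */

       if (data_len == blksz) {
           /* Entire block overwritten: no READ_DATA fetch needed. */
           storage_write(inv_off, data, blksz);
           return;
       }
       /* Partial block: fetch the old data via the READ_DATA extent,
          merge the new bytes, and store the whole block via the
          INVALID_DATA extent. */
       storage_read(rd_off, block, blksz);
       memcpy(block + data_off, data, data_len);
       storage_write(inv_off, block, blksz);
       /* The block is now READ_WRITE_DATA: report it in a later
          LAYOUTCOMMIT and stop using the READ_DATA mapping. */
   }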
2.1.5. Extents are Permissions

Layout extents returned to pNFS clients grant permission to read or
write; READ_DATA and NONE_DATA are read-only (NONE_DATA reads as
zeroes), while READ_WRITE_DATA and INVALID_DATA are read/write
(INVALID_DATA reads as zeros; any write converts it to
READ_WRITE_DATA).  This is the only means by which a client obtains
permission to perform direct I/O to storage devices; a pNFS client
MUST NOT perform direct I/O operations that are not permitted by an
extent held by the client.  Client adherence to this rule places the
pNFS server in control of potentially conflicting storage device
operations, enabling the server to determine what does conflict and
how to avoid conflicts by granting and recalling extents to/from
clients.

Block/volume class storage devices are not required to perform read
and write operations atomically.  Overlapping concurrent read and
write operations to the same data may cause the read to return a
mixture of before-write and after-write data.  Overlapping write
operations can be worse, as the result could be a mixture of data
from the two write operations; this can be particularly nasty if the
underlying storage is striped and the operations complete in
different orders on different stripes.  A pNFS server can avoid these
conflicts by implementing a "single writer XOR multiple readers"
concurrency control policy when there are multiple clients who wish
to access the same data.  This policy SHOULD be implemented when
storage devices do not provide atomicity for concurrent read/write
and write/write operations to the same data.

A client that makes a layout request that conflicts with an existing
layout delegation will be rejected with the error
NFS4ERR_LAYOUTTRYLATER.  This client is then expected to retry the
request after a short interval.  During this interval the server
needs to recall the conflicting portion of the layout delegation from
the client that currently holds it.  This reject-and-retry approach
does not prevent client starvation when there is contention for the
layout of a particular file.  For this reason a pNFS server SHOULD
implement a mechanism to prevent starvation.  One possibility is for
the server to maintain a queue of rejected layout requests.  Each new
layout request can be checked to see if it conflicts with a
previously rejected request, and if so, the newer request can be
rejected.  Once the original requesting client retries its request,
its entry in the rejected-request queue can be cleared, or the entry
can be removed from the queue when it reaches a certain age.

NFSv4 supports mandatory locks and share reservations.  These are
mechanisms that clients can use to restrict the set of I/O operations
that are permissible to other clients.  Since all I/O operations
ultimately arrive at the NFSv4 server for processing, the server is
in a position to enforce these restrictions.  However, with pNFS
layout delegations, I/Os will be issued from the clients that hold
the delegations directly to the storage devices that host the data.
These devices have no knowledge of files, mandatory locks, or share
reservations, and are not in a position to enforce such restrictions.
For this reason the NFSv4 server MUST NOT grant layout delegations
that conflict with mandatory locks or share reservations.  Further,
if a conflicting mandatory lock request or a conflicting open request
arrives at the server, the server MUST recall the part of the layout
delegation in conflict with the request before processing the
request.

2.1.6. End-of-file Processing

To avoid file-system corruption, close coordination between pNFS
clients and the server is required when an NFSv4 client changes the
end-of-file marker via the SETATTR call.  Whenever the end-of-file is
set into the middle of a file-system block, the portion of the block
which comes after the end-of-file must be zeroed on disk.  The pNFS
clients and server share the responsibility for this zeroing as
follows:

2.1.6.1. Server End-of-file Processing

When an NFSv4 client changes the end-of-file marker via a SETATTR
operation, the server MUST send a CB_SIZECHANGED notification to each
pNFS client (with the exception of the client which sent the SETATTR)
that holds a layout for the file.  Clients process this callback as
described in Section 2.1.6.2.

The CB_SIZECHANGED notification has the effect of invalidating all
data beyond the new end-of-file.  Once the server receives a
successful response to the CB_SIZECHANGED notification from a client,
it will consider any portion of any layout held by the client beyond
the new end-of-file to be invalid.  This requires that:

o  READ_WRITE_DATA extents are changed to INVALID_DATA extents.

o  READ_DATA extents are changed to NONE_DATA extents.  If this
   results in any portion of a NONE_DATA extent overlapping an
   INVALID_DATA extent, that portion of the NONE_DATA extent (which
   may be the entire extent) is immediately discarded, as
   INVALID_DATA extents are not permitted to overlap NONE_DATA
   extents.

If the new end-of-file is not set on a file-system block boundary,
the server must ensure that zeroes are written to the partial block
from the new end-of-file to the end of the block containing it.  If a
pNFS client holds a writeable layout covering that block, that pNFS
client is expected to perform this function, but the CB_SIZECHANGED
callback response needs to have a way for the client to communicate
that it has done so (see Section 3.2.2).  If no client performs this
function, the server must do the zeroing, including recalling all
layouts (readable and writable) for that block.
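The required state transitions can be summarized in a few lines of C.
This sketch is illustrative of server-side bookkeeping only; a real
implementation would first split any extent that straddles the new
end-of-file, which is elided here, and the types are local to this
sketch.

   #include <stdint.h>

   enum es { READ_WRITE_DATA, READ_DATA, INVALID_DATA, NONE_DATA };

   struct extent { uint64_t offset, length; enum es state; };

   /* Apply the post-CB_SIZECHANGED transitions to one extent that
      lies (entirely, after splitting) beyond the new end-of-file. */
   static void invalidate_beyond_eof(struct extent *e,
                                     uint64_t new_eof)
   {
       if (e->offset + e->length <= new_eof)
           return;                   /* entirely below EOF: keep */
       if (e->state == READ_WRITE_DATA)
           e->state = INVALID_DATA;
       else if (e->state == READ_DATA)
           e->state = NONE_DATA;     /* then discard any portion that
                                        would overlap INVALID_DATA */
   }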
2.1.6.2. Client End-of-file Processing

When a pNFS client receives the CB_SIZECHANGED notification from the
server, or when a pNFS client issues a SETATTR operation to set the
end-of-file marker for a file on which it holds a layout, it should
proceed in the same way.  First, it should consider any portions of
any readable layout beyond the new end-of-file to be in the NONE_DATA
state and should immediately stop servicing read requests from such
extents.  If the client holds a readable layout for the block
containing a new end-of-file that is not at a block boundary, it
SHOULD return at least that block before replying to the callback;
this avoids a callback from the server in order to zero the partial
block beyond the new end-of-file (how to do this is an open issue -
see Section 3.2.3).  Next, the client should consider any portions of
a writeable layout beyond the new end-of-file to be in the
INVALID_DATA state, and should henceforth return zeroes in response
to any read requests for these extents.  Data can, of course, be
rewritten to these extents, turning them back into READ_WRITE_DATA
extents, as desired.  Finally, if the new end-of-file marker is not
on a file-system block boundary, and if the client has a writeable
layout which covers the block containing the new end-of-file, then
the client MUST zero-fill the portion of the block after the
end-of-file marker and write this block to the storage before
responding to the CB_SIZECHANGED notification.  In this case the new
end-of-file block MUST be considered to be in the READ_WRITE_DATA
state.

2.2. Volume Identification

Storage Systems such as storage arrays can have multiple physical
network ports that need not be connected to a common network,
resulting in a pNFS client having simultaneous multipath access to
the same storage volumes via different ports on different networks.
The networks may not even be the same technology - for example,
access to the same volume via both iSCSI and Fibre Channel is
possible - hence network addresses are difficult to use for volume
identification.  For this reason, this pNFS block layout identifies
storage volumes by content, for example providing the means to match
(unique portions of) labels used by volume managers.  Any block pNFS
system using this layout MUST support a means of content-based unique
volume identification that can be employed via the data structure
given here.

   struct sigComponent {    /* disk signature component */
       offset4 sig_offset;  /* byte offset of component */
       length4 sig_length;  /* byte length of component */
       opaque  contents<>;  /* contents of this component of the
                               signature (this is opaque) */
   };

   struct pnfs_block_deviceaddr4 {
       sigComponent ds<>;   /* disk signature */
   };

A volume is content-identified by a disk signature made up of extents
within blocks and contents that must match.  The
"pnfs_block_deviceaddr4" structure is returned by the server as the
storage-protocol-specific opaque field in the "pnfs_deviceaddr4"
structure, in response to the GETDEVICEINFO or GETDEVICELIST
operations.  Note that the opaque "contents" field in the
"sigComponent" structure MUST NOT be interpreted as a zero-terminated
string, as it may contain embedded zero-valued octets.  It contains
exactly sig_length octets.  There are no restrictions on alignment
(e.g., neither sig_offset nor sig_length are required to be multiples
of 4).
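Content-based identification amounts to reading each signature
component from a candidate logical unit and comparing it byte for
byte.  The C sketch below shows that matching step under stated
assumptions: read_at() is an assumed raw-read helper, and the
structures are simplified local mirrors of the XDR above (in
particular, "contents" is compared as exactly sig_length octets,
never as a C string).

   #include <stdbool.h>
   #include <stdint.h>
   #include <stdlib.h>
   #include <string.h>

   /* Assumed helper: read 'len' bytes at byte offset 'off' from the
      candidate LU open on 'fd'; returns 0 on success. */
   extern int read_at(int fd, uint64_t off, void *buf, uint64_t len);

   struct sig_component {
       uint64_t       sig_offset;  /* byte offset on the LU */
       uint64_t       sig_length;  /* byte length of component */
       const uint8_t *contents;    /* expected octets */
   };

   static bool lu_matches_signature(int fd,
                                    const struct sig_component *sig,
                                    unsigned ncomponents)
   {
       for (unsigned i = 0; i < ncomponents; i++) {
           uint8_t *buf = malloc(sig[i].sig_length);
           if (buf == NULL ||
               read_at(fd, sig[i].sig_offset, buf,
                       sig[i].sig_length) != 0) {
               free(buf);
               return false;
           }
           /* Exact octet comparison; embedded zeroes are legal. */
           bool match = memcmp(buf, sig[i].contents,
                               sig[i].sig_length) == 0;
           free(buf);
           if (!match)
               return false;
       }
       return true;
   }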
3. New Operations Issues

This section collects issues with the [NFSV4.1] draft encountered in
writing this block/volume layout draft.

3.1. Server Controlling Client Access to Block Devices

Typically, SAN disk arrays and SAN protocols provide access control
mechanisms (access-logics, LUN masking, etc.) which operate at the
granularity of individual hosts.  The functionality provided by such
mechanisms makes it possible for the server to "fence" individual
client machines from certain physical disks - that is, to prevent
individual client machines from reading or writing to certain
physical disks.  Finer-grained access control methods are not
generally available.  This affects the ability of the block/volume
storage protocol to meet the requirements set out in the [NFSV4.1]
draft in the following ways:

3.1.1. Guarantees Provided by Layouts

See the first paragraph of the Security Considerations (Section 4).

3.1.2. I/O In-flight Issues

See the third paragraph of the Security Considerations (Section 4).

3.1.3. Crash Recovery Issues

The main [NFSV4.1] draft does not currently include a "reclaim bit"
in the LAYOUTGET operation.  This means that after a server crash,
during the server's recovery grace period, a client cannot be
guaranteed that it will be able to reclaim a writable layout that it
held before the crash (due to possibly conflicting requests from
other clients), nor can the client be guaranteed that a writeable
layout that it reclaims will have the same on-disk layout as the one
it held prior to the crash.  There is one possible scenario where
this lack of guarantees could lead to data corruption.

The problematic case is when the server crashes while the client
holds a writable layout, but has yet to commit some allocated block
extents back to the server with a LAYOUTCOMMIT operation.  If the
data that has been written to these extents is still cached by the
client, the client can simply re-write the data via NFSv4 once the
server has come back online.  However, the uncommitted extents may
represent more data than can fit in the client's cache at one time.
In this case, the data cannot be rewritten to the server; indeed, it
cannot even be safely re-read from storage in the absence of a pNFS
layout which covers those disk blocks.
The ideal solution, from the client's perspective, would be a method
that allows the client to reliably reclaim the previously held layout
from the server.  For this, the client would need a "reclaim bit" in
the LAYOUTGET operation to allow the server to recognize, and give
priority to, a reclaim request during the server's recovery grace
period, as well as a way of supplying an extent list indicating the
blocks (including storage location) that were previously held in the
writable layout.

Without this support, the server would need to make sure that
provisionally allocated extents that are supplied in response to
writable LAYOUTGET commands are persistently stored and recoverable
in the face of server crashes.  Otherwise, the client would have no
guarantee that the new writable layout it receives in response to a
reclaim request covers the same disk blocks as the old layout it held
before the server crash.  The "reclaim bit" could be implemented as
another iomode (e.g., LAYOUTIOMODE_RW_RECOVER).  If this is not done,
the result may be that the amount of written-but-uncommitted data
held by a client may need to be limited to the client's cache size,
resulting in less effective cache usage and more commit traffic.
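For illustration only, the iomode-based reclaim suggested above might
look like the following C-style sketch.  Neither the added enumerator
nor the argument structure is defined by this draft or [NFSV4.1]; the
enumerator values and field names are assumptions made for this
sketch.

   /* Existing iomodes (values illustrative), plus the suggested
      recovery mode: */
   enum pnfs_layoutiomode4 {
       LAYOUTIOMODE_READ       = 1,
       LAYOUTIOMODE_RW         = 2,
       LAYOUTIOMODE_ANY        = 3,
       LAYOUTIOMODE_RW_RECOVER = 4  /* reclaim a pre-crash layout */
   };

   struct pnfs_block_extent;        /* as defined in Section 2.1 */

   /* A reclaim-style LAYOUTGET would also carry the extents the
      client held before the crash, so the server can recognize and
      prioritize the request during its grace period. */
   struct layout_reclaim_args {
       enum pnfs_layoutiomode4   iomode;  /* LAYOUTIOMODE_RW_RECOVER */
       unsigned                  nheld;
       struct pnfs_block_extent *held;    /* pre-crash extent list */
   };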
3.2. End-of-File (EOF) Handling Issues

The main pNFS draft [NFSV4.1] includes a callback (CB_SIZECHANGED)
for a pNFS server to communicate a file size change (new EOF) to pNFS
clients.  The callback documentation indicates that "the client
should update its internal size for the file".  This is insufficient
in a number of ways described below.  Some of these issues are unique
to the block/volume layout, as this layout requires that either the
client or server zero beyond the new EOF to the end of the block that
contains the new EOF (both the file and object layouts can make this
the responsibility of the storage server(s)).

3.2.1. Truncation

There is a race between a pNFS client extending a file via direct
pNFS writes and an ordinary NFS client truncating the file via a
SETATTR - the order of these operations may be visible to the
application(s) using NFS, and hence it is necessary that they be done
in the right order.  List discussion suggests that the absence of
operation sequencing in pNFS (in contrast to HighRoad) results in the
best approach being to require a recall of the portions of the layout
that are invalidated by a truncation so that the server can perform
the truncation (this MUST include the entire layout beyond the new
EOF).  For the block/volume layout, the server is responsible for
zeroing the partial block beyond the new EOF in this case.

3.2.2. Extension

When extending a file via SETATTR, a similar race can arise between
that SETATTR and direct pNFS writes.  When CB_SIZECHANGED is used to
pass the new end-of-file to a client that holds a writable layout for
the block containing the new end-of-file, and that new end-of-file is
not at a filesystem block boundary, it is desirable for the client
holding the writable layout to take responsibility for zeroing the
partial block beyond end-of-file.  Note that the client may not be
able to do so in all cases, as the client may have been in the
process of returning that portion of the layout when the callback
arrives, or the server and client may not agree on whether the client
holds a writable layout containing that block.  Hence the
CB_SIZECHANGED reply needs to carry an indication from the client as
to whether the client has zeroed that partial block.

Open Issue: How to convey that indication.  The least intrusive way
to do this may be to define an NFS error that indicates that the
callback was successful and the client zeroed the resulting partial
block.

Open Issue: Can the callback reply be delayed until this write is
done?  An immediate callback reply with a promise to zero the partial
block, followed by a client crash, results in potential data
corruption due to failure to zero the partial block (junk appears if
the file is extended by moving the end-of-file forward).

3.2.3. Extension vs. Readable Layouts

There is an analogous issue for a client holding a readable layout
(READ_DATA) including the block that contains a new non-block-aligned
end-of-file.  In this case, the client cannot zero the partial block
itself, as it does not hold a writable layout, but if it retains the
readable layout, the server is likely to immediately issue a recall
callback to revoke the readable layout before the server writes the
zeroes.  This second callback should be avoided.

Issue: How?  Allowing an arbitrary return in the response to
CB_SIZECHANGED would get the job done, but is only needed by the
block/volume layout, and complicates the end-of-file change code on
the server by potentially returning unrelated areas of the layout.
Delaying the callback reply to allow the client to do a RETURN helps
some (2 round-trips instead of 3 if the recall were used) at the cost
of delaying the recall reply.  The best bet may be to enlarge the
CB_SIZECHANGED callback so the server can tell the client what needs
to be returned (in the readable layout case, the server asks for one
block) and for the client to accept or reject that (the client
accepts if it can do so immediately; it would reject if it needed to
do a commit); that should yield one round-trip with the server
getting exactly what it needs back (and the server MUST NOT use this
to recall a writable layout).

4. Security Considerations

Certain security responsibilities are delegated to pNFS clients.
Block/volume storage systems generally control access at a volume
granularity, and hence pNFS clients have to be trusted to only
perform accesses allowed by the layout extents they currently hold
(e.g., and not access storage for files on which a layout extent is
not held).  In general, the server will not be able to prevent a
client which holds a layout for a file from accessing parts of the
physical disk not covered by the layout.  Similarly, the server will
not be able to prevent a client from accessing blocks covered by a
layout that it has already returned.  This block-based level of
protection must be provided by the client software.  In environments
where the security requirements are such that client-side protection
from access to storage outside of the layout is not sufficient, pNFS
block/volume storage layouts SHOULD NOT be used.

This also has implications for some NFSv4 functionality outside pNFS.
For instance, if a file is covered by a mandatory read-only lock, the
server can ensure that only readable layouts for the file are granted
to pNFS clients.
However, it is up to each pNFS client to ensure that the readable
layout is used only to service read requests, and not to allow writes
to the existing parts of the file.  Since block/volume storage
systems are generally not capable of enforcing such file-based
security, in environments where pNFS clients cannot be trusted to
enforce such policies, pNFS block/volume storage layouts SHOULD NOT
be used.

When a layout lease timer expires, it is important for the sake of
correctness that any in-flight I/Os that the client issued before the
expiration of the timer are rejected at the storage.  For the
block/volume protocol, this is possible by fencing a client with an
expired layout timer from the physical storage.  Note, however, that
the granularity of this operation can only be at the
host/logical-unit level.  Thus, if a client's lease timer expires for
a single layout, it will effectively render useless *all* of the
client's layouts for files in the containing filesystem.

Access to block/volume storage is logically at a lower layer of the
I/O stack than NFSv4, and hence NFSv4 security is not directly
applicable to protocols that access such storage directly.  Depending
on the protocol, some of the security mechanisms provided by NFSv4
(e.g., encryption, cryptographic integrity) may not be available, or
may be provided via different means.  At one extreme, pNFS with
block/volume storage can be used with storage access protocols (e.g.,
parallel SCSI) that provide essentially no security functionality.
At the other extreme, pNFS may be used with storage protocols such as
iSCSI that provide significant security functionality.  It is the
responsibility of those administering and deploying pNFS with a
block/volume storage access protocol to ensure that appropriate
protection is provided to that protocol (physical security is a
common means for protocols not based on IP).  In environments where
the security requirements for the storage protocol cannot be met,
pNFS block/volume storage layouts SHOULD NOT be used.

When security is available for a storage protocol, it is generally at
a different granularity and with a different notion of identity than
NFSv4 (e.g., NFSv4 controls user access to files, iSCSI controls
initiator access to volumes).  The responsibility for enforcing
appropriate correspondences between these security layers is placed
upon the pNFS client.  As with the issues in the first paragraph of
this section, in environments where the security requirements are
such that client-side protection from access to storage outside of
the layout is not sufficient, pNFS block/volume storage layouts
SHOULD NOT be used.

5. Conclusions

This draft specifies the block/volume layout type for pNFS and
associated functionality.

6. IANA Considerations

There are no IANA considerations in this document.  All pNFS IANA
considerations are covered in [NFSV4.1].

7. Revision History

-00: Initial version as draft-black-pnfs-block-00.

-01: Rework discussion of extents as locks to talk about extents
granting access permissions.  Rewrite operation ordering section to
discuss deadlocks and races that can cause problems.  Add new section
on recall completion.  Add client copy-on-write based on text from
Craig Everhart.

-02: Fix glitches in extent state descriptions.  Describe most issues
as RESOLVED.
Most of Section 3 has been incorporated into the main pNFS draft; add
a NOTE to that effect and say that it will be deleted in the next
version of this draft (which should be a draft-ietf-nfsv4 draft).
Cleanup of a number of things has been left to that draft revision,
including the interlocks with the types in the main pNFS draft,
layout striping support, and finishing the Security Considerations
section.

-00: New version as draft-ietf-nfsv4-pnfs-block.  Removed resolved
operations issues (Section 3).  Align types with the main pNFS draft
(which is now part of the NFSv4.1 minor version draft); add volume
striping and slicing support.  New operations issues are in Section 3
- the need for a "reclaim bit" and EOF concerns are the two major
issues.  Extended and improved the Security Considerations section,
but it still needs work.  Added a one-sentence conclusion that also
still needs work.

8. Acknowledgments

This draft draws extensively on the authors' familiarity with the
mapping functionality and protocol in EMC's HighRoad system
[HighRoad].  The protocol used by HighRoad is called FMP (File
Mapping Protocol); it is an add-on protocol that runs in parallel
with filesystem protocols such as NFSv3 to provide pNFS-like
functionality for block/volume storage.  While drawing on HighRoad
FMP, the data structures and functional considerations in this draft
differ in significant ways, based on lessons learned and the
opportunity to take advantage of NFSv4 features such as COMPOUND
operations.  The design to support pNFS client participation in
copy-on-write is based on text and ideas contributed by Craig
Everhart (formerly with IBM).

9. References

9.1. Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119, March 1997.

[NFSV4.1]  Shepler, S., Ed., "NFSv4 Minor Version 1",
           draft-ietf-nfsv4-minorversion1-01.txt, Work in Progress,
           December 2005.

9.2. Informative References

[HighRoad] EMC Corporation, "EMC Celerra HighRoad", EMC C819.1 white
           paper, available at:
           http://www.emc.com/pdf/products/celerra_file_server/HighRoad_wp.pdf
           (link checked 30 December 2005).

Author's Addresses

David L. Black
EMC Corporation
176 South Street
Hopkinton, MA 01748

Phone: +1 (508) 293-7953
Email: black_david@emc.com

Stephen Fridella
EMC Corporation
32 Coslin Drive
Southboro, MA 01772

Phone: +1 (508) 305-8512
Email: fridella_stephen@emc.com

Intellectual Property Statement

The IETF takes no position regarding the validity or scope of any
Intellectual Property Rights or other rights that might be claimed to
pertain to the implementation or use of the technology described in
this document or the extent to which any license under such rights
might or might not be available; nor does it represent that it has
made any independent effort to identify any such rights.  Information
on the procedures with respect to rights in RFC documents can be
found in BCP 78 and BCP 79.
Copies of IPR disclosures made to the IETF Secretariat and any
assurances of licenses to be made available, or the result of an
attempt made to obtain a general license or permission for the use of
such proprietary rights by implementers or users of this
specification can be obtained from the IETF on-line IPR repository at
http://www.ietf.org/ipr.

The IETF invites any interested party to bring to its attention any
copyrights, patents or patent applications, or other proprietary
rights that may cover technology that may be required to implement
this standard.  Please address the information to the IETF at
ietf-ipr@ietf.org.

Disclaimer of Validity

This document and the information contained herein are provided on an
"AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET
ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE
INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Copyright Statement

Copyright (C) The Internet Society (2005).

This document is subject to the rights, licenses and restrictions
contained in BCP 78, and except as set forth therein, the authors
retain all their rights.

Acknowledgment

Funding for the RFC Editor function is currently provided by the
Internet Society.