NFSv4 Working Group                                       David L. Black
Internet Draft                                          Stephen Fridella
Expires: February 28, 2007                               EMC Corporation
                                                         August 30, 2006

                        pNFS Block/Volume Layout
                   draft-ietf-nfsv4-pnfs-block-01.txt

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.
   Note that other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire in February 2007.

Abstract

   Parallel NFS (pNFS) extends NFSv4 to allow clients to directly
   access file data on the storage used by the NFSv4 server.  This
   ability to bypass the server for data access can increase both
   performance and parallelism, but requires additional client
   functionality for data access, some of which is dependent on the
   class of storage used.  The main pNFS operations draft specifies
   storage-class-independent extensions to NFS; this draft specifies
   the additional extensions (primarily data structures) for use of
   pNFS with block- and volume-based storage.

Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].

Table of Contents

   1. Introduction
   2. Block Layout Description
      2.1. Background and Architecture
      2.2. Data Structures: Extents and Extent Lists
           2.2.1. Layout Requests and Extent Lists
           2.2.2. Layout Commits
           2.2.3. Layout Returns
           2.2.4. Client Copy-on-Write Processing
           2.2.5. Extents are Permissions
           2.2.6. End-of-file Processing
      2.3. Volume Identification
      2.4. Crash Recovery Issues
   3. Security Considerations
   4. Conclusions
   5. IANA Considerations
   6. Revision History
   7. Acknowledgments
   8. References
      8.1. Normative References
      8.2. Informative References
   Authors' Addresses
   Intellectual Property Statement
   Disclaimer of Validity
   Copyright Statement
   Acknowledgment

1. Introduction

   Figure 1 shows the overall architecture of a pNFS system:

     +-----------+
     |+-----------+                               +-----------+
     ||+-----------+                              |           |
     |||           |         NFSv4 + pNFS         |           |
     +||  Clients  |<---------------------------->|  Server   |
      +|           |                              |           |
       +-----------+                              |           |
         |||                                      +-----------+
         |||                                           |
         |||                                           |
         |||          +-----------+                    |
         |||          |+-----------+                   |
         ||+----------||+-----------+                  |
         |+-----------|||           |                  |
         +------------+||  Storage  |------------------+
                       +|  Systems  |
                        +-----------+

                     Figure 1 pNFS Architecture

   The overall approach is that pNFS-enhanced clients obtain
   sufficient information from the server to enable them to access the
   underlying storage (on the Storage Systems) directly.  See the pNFS
   portion of [NFSV4.1] for more details.  This draft is concerned
   with access from pNFS clients to Storage Systems over storage
   protocols based on blocks and volumes, such as the SCSI protocol
   family (e.g., parallel SCSI, FCP for Fibre Channel, iSCSI, SAS).
   This class of storage is referred to as block/volume storage.
   While the Server-to-Storage-System protocol is not of concern for
   interoperability here, it will typically also be a block/volume
   protocol when clients use block/volume protocols.

2. Block Layout Description

2.1. Background and Architecture

   The fundamental storage abstraction supported by block/volume
   storage is a storage volume consisting of a sequential series of
   fixed-size blocks.  This can be thought of as a logical disk; it
   may be realized by the Storage System as a physical disk, a portion
   of a physical disk, or something more complex (e.g., concatenation,
   striping, RAID, and combinations thereof) involving multiple
   physical disks or portions thereof.

   A pNFS layout for this block/volume class of storage is responsible
   for mapping from an NFS file (or portion of a file) to the blocks
   of storage volumes that contain the file.  The blocks are expressed
   as extents with 64-bit offsets and lengths using the existing NFSv4
   offset4 and length4 types.  Clients must be able to perform I/O to
   the block extents without affecting additional areas of storage
   (especially important for writes); therefore, extents MUST be
   aligned to 512-byte boundaries, and SHOULD be aligned to the block
   size used by the NFSv4 server in managing the actual filesystem (4
   kilobytes and 8 kilobytes are common block sizes).  This block size
   is available as an NFSv4 attribute - see Section 11.4 of [NFSV4.1].

   The pNFS operation for requesting a layout (LAYOUTGET) includes the
   "pnfs_layoutiomode4 iomode" argument, which indicates whether the
   requested layout is for read-only or read-write use.  A read-only
   layout may contain holes that are read as zero, whereas a
   read-write layout will contain allocated, but uninitialized,
   storage in those holes (read as zero, can be written by the
   client).  This draft also supports client participation in
   copy-on-write by providing both read-only and uninitialized storage
   for the same range in a layout.  Reads are initially performed on
   the read-only storage, with writes going to the uninitialized
   storage.  After the first write that initializes the uninitialized
   storage, all reads are performed to that now-initialized writeable
   storage, and the corresponding read-only storage is no longer used.
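
   As a non-normative illustration of the alignment rules above, the
   following C sketch checks an extent against the 512-byte alignment
   requirement (MUST) and the server's file-system block size
   (SHOULD).  The helper names are hypothetical, and a simplified
   extent type stands in for the XDR definitions given in section 2.2:

      #include <stdbool.h>
      #include <stdint.h>

      /* Simplified stand-in for the pnfs_block_extent XDR type. */
      struct extent {
          uint64_t file_offset;     /* starting offset in the file */
          uint64_t length;          /* size of the extent in bytes */
          uint64_t storage_offset;  /* starting offset on the volume */
      };

      /* MUST: every extent is aligned to 512-byte boundaries. */
      static bool extent_meets_must(const struct extent *e)
      {
          return (e->file_offset % 512 == 0) &&
                 (e->length % 512 == 0) &&
                 (e->storage_offset % 512 == 0);
      }

      /* SHOULD: extents are also aligned to the server's file-system
       * block size (e.g., 4096 or 8192), which is available to the
       * client as an NFSv4 attribute. */
      static bool extent_meets_should(const struct extent *e,
                                      uint64_t fs_blocksize)
      {
          return (e->file_offset % fs_blocksize == 0) &&
                 (e->length % fs_blocksize == 0) &&
                 (e->storage_offset % fs_blocksize == 0);
      }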

2.2. Data Structures: Extents and Extent Lists

   A pNFS block layout is a list of extents within a flat array of
   512-byte data blocks known as a volume.  A volume may correspond to
   a single logical unit in a SAN, or to a more complex aggregation of
   multiple logical units.  The details of the volume topology can be
   determined by using the GETDEVICEINFO or GETDEVICELIST operation
   (see the discussion of volume identification in section 2.3 below).
   The block layout describes the individual block extents on the
   volume that make up the file.  Each individual extent MUST be at
   least 512-byte aligned.

   enum extentState4 {
       READ_WRITE_DATA = 0, /* the data located by this extent is
                               valid for reading and writing. */
       READ_DATA       = 1, /* the data located by this extent is
                               valid for reading only; it may not be
                               written. */
       INVALID_DATA    = 2, /* the location is valid; the data is
                               invalid.  It is a newly (pre-)
                               allocated extent.  There is physical
                               space on the volume. */
       NONE_DATA       = 3  /* the location is invalid.  It is a hole
                               in the file.  There is no physical
                               space on the volume. */
   };

   struct pnfs_block_extent {
       offset4      offset;         /* the starting offset in the
                                       file */
       length4      length;         /* the size of the extent */
       offset4      storage_offset; /* the starting offset in the
                                       volume */
       extentState4 es;             /* the state of this extent */
   };

   struct pnfs_block_layout {
       pnfs_deviceid4    volume;    /* logical volume on which the
                                       file is stored. */
       pnfs_block_extent extents<>; /* extents which make up this
                                       layout. */
   };

   The block layout consists of an identifier of the logical volume on
   which the file is stored, followed by a list of extents which map
   the logical regions of the file to physical locations on the
   volume.  The "storage_offset" field within each extent identifies a
   location on the logical volume described by the "volume" field in
   the layout.  The client is responsible for translating this logical
   offset into an offset on the appropriate underlying SAN logical
   unit.

   Each extent maps a logical region of the file onto a portion of the
   specified logical volume.  The offset, length, and es fields of an
   extent returned from the server are always valid.  The
   interpretation of the storage_offset field depends on the value of
   es as follows:

   o READ_WRITE_DATA means that storage_offset is valid, and points to
     valid/initialized data that can be read and written.

   o READ_DATA means that storage_offset is valid and points to
     valid/initialized data which can only be read.  Write operations
     are prohibited; the client may need to request a read-write
     layout.

   o INVALID_DATA means that storage_offset is valid, but points to
     invalid, uninitialized data.  This data must not be physically
     read from the disk until it has been initialized.  A read request
     for an INVALID_DATA extent must fill the user buffer with zeros.
     Write requests must write whole server-sized blocks to the disk;
     bytes not initialized by the user must be set to zero.  Any write
     to storage in an INVALID_DATA extent changes the written portion
     of the extent to READ_WRITE_DATA; the pNFS client is responsible
     for reporting this change via LAYOUTCOMMIT.

   o NONE_DATA means that storage_offset is not valid, and this extent
     may not be used to satisfy write requests.  Read requests may be
     satisfied by zero-filling as for INVALID_DATA.  NONE_DATA extents
     are returned by requests for readable extents; they are never
     returned if the request was for a writeable extent.

   The extent list lists all relevant extents in increasing order of
   the file offset of each extent; any ties are broken by increasing
   order of the extent state (es).
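
   As a non-normative illustration of these extent states, the
   following C sketch shows how a client might satisfy a read against
   a single extent.  The disk_read() helper is a hypothetical stand-in
   for the client's SAN read path:

      #include <stdint.h>
      #include <string.h>

      enum extent_state {
          READ_WRITE_DATA, READ_DATA, INVALID_DATA, NONE_DATA
      };

      /* Hypothetical stand-in for the client's SAN read path. */
      extern int disk_read(uint64_t storage_offset, void *buf,
                           uint64_t len);

      int read_from_extent(enum extent_state es, uint64_t storage_offset,
                           void *buf, uint64_t len)
      {
          switch (es) {
          case READ_WRITE_DATA:
          case READ_DATA:
              /* storage_offset is valid and the data is initialized. */
              return disk_read(storage_offset, buf, len);
          case INVALID_DATA:
          case NONE_DATA:
              /* The storage must not be read (INVALID_DATA) or does
               * not exist (NONE_DATA); both read as zeroes. */
              memset(buf, 0, len);
              return 0;
          }
          return -1;
      }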

2.2.1. Layout Requests and Extent Lists

   Each request for a layout specifies at least three parameters:
   offset, desired size, and minimum size.  If the status of a request
   indicates success, the extent list returned must meet the following
   criteria:

   o A request for a readable (but not writeable) layout returns only
     READ_DATA or NONE_DATA extents (but not INVALID_DATA or
     READ_WRITE_DATA extents).

   o A request for a writeable layout returns READ_WRITE_DATA or
     INVALID_DATA extents (but not NONE_DATA extents).  It may also
     return READ_DATA extents, but only when the offset ranges in
     those extents are also covered by INVALID_DATA extents to permit
     writes.

   o The first extent in the list MUST contain the starting offset.

   o The total size of extents in the extent list MUST cover at least
     the minimum size and no more than the desired size.  One
     exception is allowed: the total size MAY be smaller if only
     readable extents were requested and EOF is encountered.

   o Extents in the extent list MUST be logically contiguous for a
     read-only layout.  For a read-write layout, the set of writable
     extents (i.e., excluding READ_DATA extents) MUST be logically
     contiguous.  Every READ_DATA extent in a read-write layout MUST
     be covered by an INVALID_DATA extent.  This overlap of READ_DATA
     and INVALID_DATA extents is the only permitted extent overlap.

   o Extents MUST be ordered in the list by starting offset, with
     READ_DATA extents preceding INVALID_DATA extents in the case of
     equal file offsets.

2.2.2. Layout Commits

   struct pnfs_block_layoutupdate {
       pnfs_block_extent commit_list<>; /* list of extents which now
                                           contain valid data. */
       bool              make_version;  /* client requests server to
                                           create a copy-on-write
                                           image of this file. */
   };

   The "pnfs_block_layoutupdate" structure is used by the client as
   the block-protocol-specific argument in a LAYOUTCOMMIT operation.
   The "commit_list" field is an extent list covering regions of the
   file layout that were previously in the INVALID_DATA state, but
   have been written by the client and should now be considered in the
   READ_WRITE_DATA state.  It should be noted that the server may be
   unable to commit regions at a granularity smaller than a
   file-system block (typically 4KB or 8KB).  As noted above, the
   block size that the server uses is available as an NFSv4 attribute,
   and any extents included in the "commit_list" must be aligned to
   this granularity.  If the client believes that its actions have
   moved the end-of-file into the middle of a block being committed,
   the client MUST write zeroes from the end-of-file to the end of
   that block before committing the block.  Failure to do so may
   result in junk (uninitialized data) appearing in that area if the
   file is subsequently extended by moving the end-of-file.
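
   The end-of-file rule above can be illustrated with a short,
   non-normative C sketch.  Here zero_storage() is a hypothetical
   stand-in for the client's SAN write path, and fs_blocksize is the
   server's file-system block size obtained via the NFSv4 attribute:

      #include <stdint.h>

      /* Hypothetical stand-in for the client's SAN write path:
       * writes len zero bytes at storage_offset on the volume. */
      extern int zero_storage(uint64_t storage_offset, uint64_t len);

      /* Before committing a block that now contains the end-of-file,
       * zero the tail of that block so a later extension of the file
       * does not expose uninitialized data. */
      int zero_eof_block_tail(uint64_t eof, uint64_t block_file_offset,
                              uint64_t block_storage_offset,
                              uint64_t fs_blocksize)
      {
          uint64_t block_end = block_file_offset + fs_blocksize;

          if (eof <= block_file_offset || eof >= block_end)
              return 0;  /* EOF is not in the middle of this block */

          /* Zero from EOF to the end of the block, at the
           * corresponding offset on the volume. */
          return zero_storage(block_storage_offset +
                                  (eof - block_file_offset),
                              block_end - eof);
      }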

   The "make_version" field of the structure is a flag that the client
   may set to request that the server create a copy-on-write image of
   the file (pNFS clients may be involved in this operation - see
   section 2.2.4, below).  In anticipation of this operation, a client
   which sets the "make_version" flag in the LAYOUTCOMMIT operation
   should immediately mark all extents in the layout that it possesses
   as being in state READ_DATA.  Future writes to the file require a
   new LAYOUTGET operation to the server with an "iomode" set to
   LAYOUTIOMODE_RW.

2.2.3. Layout Returns

   struct pnfs_block_layoutreturn {
       pnfs_block_extent rel_list<>; /* list of extents the client
                                        will no longer use. */
   };

   The "rel_list" field is an extent list covering regions of the file
   layout that are no longer needed by the client.  Including extents
   in the "rel_list" for a LAYOUTRETURN operation represents an
   explicit release of resources by the client, usually done for the
   purpose of avoiding unnecessary CB_LAYOUTRECALL operations in the
   future.

   Note that the block/volume layout supports unilateral layout
   revocation.  When a layout is unilaterally revoked by the server,
   usually due to the client's lease timer expiring or the client
   failing to return a layout in a timely manner, it is important for
   the sake of correctness that any in-flight I/Os that the client
   issued before the layout was revoked are rejected at the storage.
   For the block/volume protocol, this is possible by fencing a client
   with an expired layout timer from the physical storage.  Note,
   however, that the granularity of this operation can only be at the
   host/logical-unit level.  Thus, if one of a client's layouts is
   unilaterally revoked by the server, it will effectively render
   useless *all* of the client's layouts for files in the same
   filesystem.

2.2.4. Client Copy-on-Write Processing

   Distinguishing the READ_WRITE_DATA and READ_DATA extent types in
   combination with the allowed overlap of READ_DATA extents with
   INVALID_DATA extents allows copy-on-write processing to be done by
   pNFS clients.  In classic NFS, this operation would be done by the
   server.  Since pNFS enables clients to do direct block access, it
   is useful for clients to participate in copy-on-write operations.
   All block/volume pNFS clients MUST support this copy-on-write
   processing.

   When a client wishes to write data covered by a READ_DATA extent,
   it MUST have requested a writable layout from the server; that
   layout will contain INVALID_DATA extents to cover all the data
   ranges of that layout's READ_DATA extents.  More precisely, for any
   file offset range covered by one or more READ_DATA extents in a
   writable layout, the server MUST include one or more INVALID_DATA
   extents in the layout that cover the same file offset range.  When
   performing a write to such an area of a layout, the client MUST
   effectively copy the data from the READ_DATA extent for any partial
   blocks of that file offset range, merge in the changes to be
   written, and write the result to the INVALID_DATA extent for the
   blocks of that file offset range.  That is, if entire blocks of
   data are to be overwritten by an operation, the corresponding
   READ_DATA blocks need not be fetched, but any partial-block writes
   must be merged with data fetched via READ_DATA extents before
   storing the result via INVALID_DATA extents.  For the purposes of
   this discussion, "entire blocks" and "partial blocks" refer to the
   server's file-system block size.
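
   A non-normative C sketch of this read-merge-write sequence for a
   partial-block write follows; disk_read() and disk_write() are
   hypothetical stand-ins for the client's SAN I/O path:

      #include <stdint.h>
      #include <string.h>

      extern int disk_read(uint64_t storage_offset, void *buf,
                           uint64_t len);
      extern int disk_write(uint64_t storage_offset, const void *buf,
                            uint64_t len);

      /* Copy-on-write for a partial-block write: read the old block
       * via the READ_DATA extent, merge in the new bytes, and write
       * the whole block to the INVALID_DATA extent. */
      int cow_partial_block_write(uint64_t read_storage_offset,
                                  uint64_t write_storage_offset,
                                  uint64_t offset_in_block,
                                  const void *data, uint64_t len,
                                  uint64_t fs_blocksize)
      {
          uint8_t block[8192];  /* assumes fs_blocksize <= 8192 */

          if (fs_blocksize > sizeof(block) ||
              offset_in_block + len > fs_blocksize)
              return -1;

          /* Fetch the old contents, merge the new bytes, and store
           * the result via the INVALID_DATA extent. */
          if (disk_read(read_storage_offset, block, fs_blocksize) != 0)
              return -1;
          memcpy(block + offset_in_block, data, len);
          return disk_write(write_storage_offset, block, fs_blocksize);
      }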

   Storing data in an INVALID_DATA extent converts the written portion
   of the INVALID_DATA extent to a READ_WRITE_DATA extent; all
   subsequent reads MUST be performed from this extent, and the
   corresponding portion of the READ_DATA extent MUST NOT be used
   after data has been stored in the INVALID_DATA extent.

   In the LAYOUTCOMMIT operation that normally sends updated layout
   information back to the server, for writable data, some
   INVALID_DATA extents may be committed as READ_WRITE_DATA extents,
   signifying that the storage at the corresponding storage_offset
   values has been stored into and is now to be considered as valid
   data to be read.  READ_DATA extents are not committed to the
   server.  For extents that the client receives via LAYOUTGET as
   INVALID_DATA and returns via LAYOUTCOMMIT as READ_WRITE_DATA, the
   server will understand that the READ_DATA mapping for that extent
   is no longer valid or necessary for that file.

2.2.5. Extents are Permissions

   Layout extents returned to pNFS clients grant permission to read or
   write; READ_DATA and NONE_DATA are read-only (NONE_DATA reads as
   zeroes), while READ_WRITE_DATA and INVALID_DATA are read/write
   (INVALID_DATA reads as zeroes, and any write converts it to
   READ_WRITE_DATA).  This is the only means by which a client obtains
   permission to perform direct I/O to storage devices; a pNFS client
   MUST NOT perform direct I/O operations that are not permitted by an
   extent held by the client.  Client adherence to this rule places
   the pNFS server in control of potentially conflicting storage
   device operations, enabling the server to determine what does
   conflict and how to avoid conflicts by granting and recalling
   extents to/from clients.

   Block/volume class storage devices are not required to perform read
   and write operations atomically.  Overlapping concurrent read and
   write operations to the same data may cause the read to return a
   mixture of before-write and after-write data.  Overlapping write
   operations can be worse, as the result could be a mixture of data
   from the two write operations; this can be particularly nasty if
   the underlying storage is striped and the operations complete in
   different orders on different stripes.  A pNFS server can avoid
   these conflicts by implementing a "single writer XOR multiple
   readers" concurrency control policy when there are multiple clients
   who wish to access the same data.  This policy SHOULD be
   implemented when storage devices do not provide atomicity for
   concurrent read/write and write/write operations to the same data.

   A client that makes a layout request that conflicts with an
   existing layout delegation will be rejected with the error
   NFS4ERR_LAYOUTTRYLATER.  The client is then expected to retry the
   request after a short interval.  During this interval the server
   needs to recall the conflicting portion of the layout delegation
   from the client that currently holds it.  This reject-and-retry
   approach does not prevent client starvation when there is
   contention for the layout of a particular file.  For this reason a
   pNFS server SHOULD implement a mechanism to prevent starvation.
   One possibility is for the server to maintain a queue of rejected
   layout requests.  Each new layout request can be checked to see if
   it conflicts with a previously rejected request, and if so, the
   newer request can be rejected.  Once the original requesting client
   retries its request, its entry in the rejected-request queue can be
   cleared; alternatively, an entry can be removed from the queue when
   it reaches a certain age.
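
   The following non-normative C sketch illustrates one form the
   rejected-request queue described above could take.  All types and
   names are hypothetical, and a real server would also need to
   enqueue rejected requests and clear entries when the original
   client retries:

      #include <stdbool.h>
      #include <stdint.h>
      #include <time.h>

      #define MAX_REJECTED 64

      struct rejected_req {
          uint64_t client_id;
          uint64_t file_id;
          uint64_t offset, length;
          time_t   when;           /* time of the rejection */
      };

      static struct rejected_req queue[MAX_REJECTED];
      static int nqueued;

      static bool overlaps(const struct rejected_req *r,
                           uint64_t file_id, uint64_t offset,
                           uint64_t length)
      {
          return r->file_id == file_id &&
                 offset < r->offset + r->length &&
                 r->offset < offset + length;
      }

      /* Returns true if the new request conflicts with an earlier
       * rejected request from another client and should itself be
       * rejected; entries older than max_age are dropped. */
      bool should_reject(uint64_t client_id, uint64_t file_id,
                         uint64_t offset, uint64_t length,
                         time_t max_age)
      {
          time_t now = time(NULL);

          for (int i = 0; i < nqueued; i++) {
              if (now - queue[i].when > max_age) {
                  queue[i] = queue[--nqueued];  /* age out */
                  i--;
                  continue;
              }
              if (queue[i].client_id != client_id &&
                  overlaps(&queue[i], file_id, offset, length))
                  return true;
          }
          return false;
      }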

   NFSv4 supports mandatory locks and share reservations.  These are
   mechanisms that clients can use to restrict the set of I/O
   operations that are permissible to other clients.  Since all I/O
   operations ultimately arrive at the NFSv4 server for processing,
   the server is in a position to enforce these restrictions.
   However, with pNFS layout delegations, I/Os will be issued from the
   clients that hold the delegations directly to the storage devices
   that host the data.  These devices have no knowledge of files,
   mandatory locks, or share reservations, and are not in a position
   to enforce such restrictions.  For this reason the NFSv4 server
   MUST NOT grant layout delegations that conflict with mandatory
   locks or share reservations.  Further, if a conflicting mandatory
   lock request or a conflicting open request arrives at the server,
   the server MUST recall the part of the layout delegation in
   conflict with the request before processing the request.

2.2.6. End-of-file Processing

   The end-of-file location can be changed in two ways: implicitly as
   the result of a WRITE or LAYOUTCOMMIT beyond the current
   end-of-file, or explicitly as the result of a SETATTR request.
   Typically, when a file is truncated by an NFSv4 client via the
   SETATTR call, the server frees any disk blocks belonging to the
   file which are beyond the new end-of-file byte, and may write zeros
   to the portion of the new end-of-file block beyond the new
   end-of-file byte.  These actions render semantically invalid any
   pNFS layouts which refer to the blocks that are freed or written.
   Therefore, the server MUST recall from clients the portions of any
   pNFS layouts which refer to blocks that will be freed or written by
   the server before processing the truncate request.  These recalls
   may take time to complete; as explained in [NFSV4.1], if the server
   cannot respond to the client SETATTR request in a reasonable amount
   of time, it SHOULD reply to the client with the error
   NFS4ERR_DELAY.

   Blocks in the INVALID_DATA state which lie beyond the new
   end-of-file block present a special case.  The server has reserved
   these blocks for use by a pNFS client with a writable layout for
   the file, but the client has yet to commit the blocks, and they are
   not yet a part of the file mapping on disk.  The server MAY free
   these blocks while processing the SETATTR request.  If so, the
   server MUST recall any layouts from pNFS clients which refer to the
   blocks before processing the truncate.  If the server does not free
   the INVALID_DATA blocks while processing the SETATTR request, it
   need not recall layouts which refer only to the INVALID_DATA
   blocks.

   When a file is extended implicitly by a WRITE or LAYOUTCOMMIT
   beyond the current end-of-file, or extended explicitly by a SETATTR
   request, the server need not recall any portions of any pNFS
   layouts.
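
   The truncation rules above amount to a per-extent recall decision,
   which the following non-normative C sketch summarizes for a single
   extent of a layout.  Types and names are hypothetical, and
   freeing_invalid reflects whether the server chooses to free
   uncommitted INVALID_DATA blocks:

      #include <stdbool.h>
      #include <stdint.h>

      enum extent_state {
          READ_WRITE_DATA, READ_DATA, INVALID_DATA, NONE_DATA
      };

      struct extent {
          uint64_t file_offset;
          uint64_t length;
          enum extent_state es;
      };

      bool must_recall_for_truncate(const struct extent *e,
                                    uint64_t new_eof,
                                    uint64_t fs_blocksize,
                                    bool freeing_invalid)
      {
          /* First byte affected by the truncate: the start of the
           * block containing the new EOF (that block may have its
           * tail zeroed; later blocks are freed). */
          uint64_t first_affected =
              (new_eof / fs_blocksize) * fs_blocksize;

          if (e->file_offset + e->length <= first_affected)
              return false;  /* extent ends before the affected area */
          if (e->es == INVALID_DATA && !freeing_invalid)
              return false;  /* uncommitted blocks are being kept */
          return true;
      }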

2.3. Volume Identification

   Storage Systems such as storage arrays can have multiple physical
   network ports that need not be connected to a common network,
   resulting in a pNFS client having simultaneous multipath access to
   the same storage volumes via different ports on different networks.
   The networks may not even be the same technology - for example,
   access to the same volume via both iSCSI and Fibre Channel is
   possible.  Network addresses are therefore difficult to use for
   volume identification.  For this reason, this pNFS block layout
   identifies storage volumes by content, for example providing the
   means to match (unique portions of) labels used by volume managers.
   Any block pNFS system using this layout MUST support a means of
   content-based unique volume identification that can be employed via
   the data structure given here.

   struct sigComponent {       /* disk signature component */
       offset4 sig_offset;     /* byte offset of component */
       length4 sig_length;     /* byte length of component */
       opaque  contents<>;     /* contents of this component of the
                                  signature (this is opaque) */
   };

   enum pnfs_block_volume_type {
       VOLUME_SIMPLE = 0,      /* volume maps to a single LU */
       VOLUME_SLICE  = 1,      /* volume is a slice of another
                                  volume */
       VOLUME_CONCAT = 2,      /* volume is a concatenation of
                                  multiple volumes */
       VOLUME_STRIPE = 3       /* volume is striped across multiple
                                  volumes */
   };

   struct pnfs_block_slice_volume_info {
       offset4        start;   /* block-offset of the start of the
                                  slice */
       length4        length;  /* length of slice in blocks */
       pnfs_deviceid4 volume;  /* volume which is sliced */
   };

   struct pnfs_block_concat_volume_info {
       pnfs_deviceid4 volumes<>;  /* volumes which are concatenated */
   };

   struct pnfs_block_stripe_volume_info {
       length4        stripe_unit;  /* size of stripe */
       pnfs_deviceid4 volumes<>;    /* volumes which are striped
                                       across */
   };

   union pnfs_block_deviceaddr4 switch (pnfs_block_volume_type type) {
   case VOLUME_SIMPLE:
       sigComponent ds;        /* disk signature */
   case VOLUME_SLICE:
       pnfs_block_slice_volume_info slice_info;
   case VOLUME_CONCAT:
       pnfs_block_concat_volume_info concat_info;
   case VOLUME_STRIPE:
       pnfs_block_stripe_volume_info stripe_info;
   default:
       void;
   };

   The "pnfs_block_deviceaddr4" union is a recursive structure that
   allows arbitrarily complex nested volume structures to be encoded.
   The types of aggregation that are allowed are stripes,
   concatenations, and slices.  The base case is a volume which maps
   simply to one logical unit in the SAN, identified by the
   "sigComponent" structure.  Each SAN logical unit is
   content-identified by a disk signature made up of extents within
   blocks and contents that must match.  The "pnfs_block_deviceaddr4"
   union is returned by the server as the storage-protocol-specific
   opaque field in the "pnfs_deviceaddr4" structure, in response to
   the GETDEVICEINFO or GETDEVICELIST operations.  Note that the
   opaque "contents" field in the "sigComponent" structure MUST NOT be
   interpreted as a zero-terminated string, as it may contain embedded
   zero-valued octets.  It contains exactly sig_length octets.  There
   are no restrictions on alignment (e.g., neither sig_offset nor
   sig_length is required to be a multiple of 4).
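
   To make the client's address-translation responsibility concrete,
   the following non-normative C sketch resolves an offset on a sliced
   or striped volume to an offset on a component volume.  The
   round-robin stripe arithmetic shown is one plausible
   interpretation; this draft defines the encoding of the volume
   topology, not the mapping arithmetic, so the formulas here are an
   assumption:

      #include <stdint.h>

      struct resolved {
          uint32_t volume_index;  /* which component volume */
          uint64_t offset;        /* byte offset on that volume */
      };

      /* VOLUME_SLICE: shift the offset by the slice's starting
       * position.  The XDR "start" field is in 512-byte blocks, so
       * convert it to bytes before calling. */
      struct resolved resolve_slice(uint64_t offset,
                                    uint64_t slice_start_bytes)
      {
          struct resolved r = { 0, slice_start_bytes + offset };
          return r;
      }

      /* VOLUME_STRIPE: stripe units are assumed to be dealt
       * round-robin across the n component volumes (byte units). */
      struct resolved resolve_stripe(uint64_t offset,
                                     uint64_t stripe_unit,
                                     uint32_t nvolumes)
      {
          uint64_t su = offset / stripe_unit;  /* global stripe unit */
          struct resolved r;

          r.volume_index = (uint32_t)(su % nvolumes);
          r.offset = (su / nvolumes) * stripe_unit +
                     offset % stripe_unit;
          return r;
      }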

2.4. Crash Recovery Issues

   When the server crashes while the client holds a writable layout,
   the client has written data to blocks covered by the layout, and
   the blocks are still in the INVALID_DATA state, the client has two
   options for recovery.  If the data that has been written to these
   blocks is still cached by the client, the client can simply
   re-write the data via NFSv4, once the server has come back online.
   However, if the data is no longer in the client's cache, the client
   MUST NOT attempt to source the data from the data servers.
   Instead, it should attempt to commit the blocks in question to the
   server during the server's recovery grace period, by sending a
   LAYOUTCOMMIT with the "reclaim" flag set to true.  This process is
   described in detail in [NFSV4.1] section 21.42.4.

3. Security Considerations

   Typically, SAN disk arrays and SAN protocols provide access-control
   mechanisms (access logics, LUN masking, etc.) which operate at the
   granularity of individual hosts.  The functionality provided by
   such mechanisms makes it possible for the server to "fence"
   individual client machines from certain physical disks; that is, to
   prevent individual client machines from reading or writing to
   certain physical disks.  Finer-grained access control methods are
   not generally available.  For this reason, certain security
   responsibilities are delegated to pNFS clients for block/volume
   layouts.  Block/volume storage systems generally control access at
   a volume granularity, and hence pNFS clients have to be trusted to
   only perform accesses allowed by the layout extents they currently
   hold (e.g., and not access storage for files on which a layout
   extent is not held).  In general, the server will not be able to
   prevent a client which holds a layout for a file from accessing
   parts of the physical disk not covered by the layout.  Similarly,
   the server will not be able to prevent a client from accessing
   blocks covered by a layout that it has already returned.  This
   block-based level of protection must be provided by the client
   software.

   An alternative method of block/volume protocol use is for the
   storage devices to export virtualized block addresses, which do
   reflect the files to which blocks belong.  These virtual block
   addresses are exported to pNFS clients via layouts.  This allows
   the storage device to make appropriate access checks, while mapping
   virtual block addresses to physical block addresses.  In
   environments where the security requirements are such that
   client-side protection from access to storage outside of the layout
   is not sufficient, pNFS block/volume storage layouts SHOULD NOT be
   used, unless the storage device is able to implement the
   appropriate access checks, via use of virtualized block addresses
   or other means.

   This also has implications for some NFSv4 functionality outside
   pNFS.  For instance, if a file is covered by a mandatory read-only
   lock, the server can ensure that only readable layouts for the file
   are granted to pNFS clients.  However, it is up to each pNFS client
   to ensure that the readable layout is used only to service read
   requests, and not to allow writes to the existing parts of the
   file.  Since block/volume storage systems are generally not capable
   of enforcing such file-based security, in environments where pNFS
   clients cannot be trusted to enforce such policies, pNFS
   block/volume storage layouts SHOULD NOT be used.

   Access to block/volume storage is logically at a lower layer of the
   I/O stack than NFSv4, and hence NFSv4 security is not directly
   applicable to protocols that access such storage directly.
   Depending on the protocol, some of the security mechanisms provided
   by NFSv4 (e.g., encryption, cryptographic integrity) may not be
   available, or may be provided via different means.  At one extreme,
   pNFS with block/volume storage can be used with storage access
   protocols (e.g., parallel SCSI) that provide essentially no
   security functionality.  At the other extreme, pNFS may be used
   with storage protocols such as iSCSI that provide significant
   security functionality.  It is the responsibility of those
   administering and deploying pNFS with a block/volume storage access
   protocol to ensure that appropriate protection is provided to that
   protocol (physical security is a common means for protocols not
   based on IP).  In environments where the security requirements for
   the storage protocol cannot be met, pNFS block/volume storage
   layouts SHOULD NOT be used.

   When security is available for a storage protocol, it is generally
   at a different granularity and with a different notion of identity
   than NFSv4 (e.g., NFSv4 controls user access to files, whereas
   iSCSI controls initiator access to volumes).  The responsibility
   for enforcing appropriate correspondences between these security
   layers is placed upon the pNFS client.  As with the issues in the
   first paragraph of this section, in environments where the security
   requirements are such that client-side protection from access to
   storage outside of the layout is not sufficient, pNFS block/volume
   storage layouts SHOULD NOT be used.

4. Conclusions

   This draft specifies the block/volume layout type for pNFS and
   associated functionality.

5. IANA Considerations

   There are no IANA considerations in this document.  All pNFS IANA
   considerations are covered in [NFSV4.1].

6. Revision History

   -00: Initial version as draft-black-pnfs-block-00.

   -01: Rework discussion of extents as locks to talk about extents
   granting access permissions.  Rewrite operation ordering section to
   discuss deadlocks and races that can cause problems.  Add new
   section on recall completion.  Add client copy-on-write based on
   text from Craig Everhart.

   -02: Fix glitches in extent state descriptions.  Describe most
   issues as RESOLVED.  Most of Section 3 has been incorporated into
   the main pNFS draft; add a NOTE to that effect and say that it will
   be deleted in the next version of this draft (which should be a
   draft-ietf-nfsv4 draft).  Cleanup of a number of things has been
   left to that draft revision, including the interlocks with the
   types in the main pNFS draft, layout striping support, and
   finishing the Security Considerations section.

   -00: New version as draft-ietf-nfsv4-pnfs-block.  Removed resolved
   operations issues (Section 3).  Align types with the main pNFS
   draft (which is now part of the NFSv4.1 minor version draft), add
   volume striping and slicing support.  New operations issues are in
   Section 3 - the need for a "reclaim bit" and EOF concerns are the
   two major issues.  Extended and improved the Security
   Considerations section, but it still needs work.  Added 1-sentence
   conclusion that also still needs work.

   -01: Changed definition of the pnfs_block_deviceaddr4 union to
   allow more concise representation of aggregated volume structures.
   Fixed typos to make both the pnfs_block_layoutupdate and
   pnfs_block_layoutreturn structures contain extent lists instead of
   a single extent.  Updated section 2.1.6 to remove references to
   CB_SIZECHANGED.  Moved description of recovery from the "Issues"
   section to the "Block Layout Description" section.  Removed section
   3.2 "End-of-file handling issues".  Merged the old "block/volume
   layout security considerations" section from the previous version
   of [NFSV4.1] with section 4.  Moved the paragraph on lingering
   writes to the section which describes layout return.  Removed the
   Issues section (3) as the remaining issues are all resolved.

7. Acknowledgments

   This draft draws extensively on the authors' familiarity with the
   mapping functionality and protocol in EMC's HighRoad system
   [HighRoad].  The protocol used by HighRoad is called FMP (File
   Mapping Protocol); it is an add-on protocol that runs in parallel
   with filesystem protocols such as NFSv3 to provide pNFS-like
   functionality for block/volume storage.  While drawing on HighRoad
   FMP, the data structures and functional considerations in this
   draft differ in significant ways, based on lessons learned and the
   opportunity to take advantage of NFSv4 features such as COMPOUND
   operations.  The design to support pNFS client participation in
   copy-on-write is based on text and ideas contributed by Craig
   Everhart (formerly with IBM).

8. References

8.1. Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [NFSV4.1]  Shepler, S., Eisler, M., and Noveck, D., eds., "NFSv4
              Minor Version 1", draft-ietf-nfsv4-minorversion1-06.txt
              (work in progress), August 2006.

8.2. Informative References

   [HighRoad] EMC Corporation, "EMC Celerra HighRoad", EMC C819.1
              white paper, available at:
              http://www.emc.com/pdf/products/celerra_file_server/HighRoad_wp.pdf
              link checked 29 August 2006.

Authors' Addresses

   David L. Black
   EMC Corporation
   176 South Street
   Hopkinton, MA 01748

   Phone: +1 (508) 293-7953
   Email: black_david@emc.com

   Stephen Fridella
   EMC Corporation
   32 Coslin Drive
   Southboro, MA 01772

   Phone: +1 (508) 305-8512
   Email: fridella_stephen@emc.com

Intellectual Property Statement

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Disclaimer of Validity

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES,
   EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT
   THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR
   ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A
   PARTICULAR PURPOSE.

Copyright Statement

   Copyright (C) The Internet Society (2006).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

Acknowledgment

   Funding for the RFC Editor function is currently provided by the
   Internet Society.