1 NFSv4 Working Group David L. Black
2 Internet Draft Stephen Fridella
3 Expires: August 2007 Jason Glasgow
4 Intended Status: Proposed Standard EMC Corporation
5 February 21, 2007

7 pNFS Block/Volume Layout
8 draft-ietf-nfsv4-pnfs-block-02.txt

10 Status of this Memo

12 By submitting this Internet-Draft, each author represents that
13 any applicable patent or other IPR claims of which he or she is
14 aware have been or will be disclosed, and any of which he or she
15 becomes aware will be disclosed, in accordance with Section 6 of
16 BCP 79.

18 Internet-Drafts are working documents of the Internet Engineering
19 Task Force (IETF), its areas, and its working groups. Note that
20 other groups may also distribute working documents as Internet-
21 Drafts.

23 Internet-Drafts are draft documents valid for a maximum of six months
24 and may be updated, replaced, or obsoleted by other documents at any
25 time.
It is inappropriate to use Internet-Drafts as reference 26 material or to cite them other than as "work in progress." 28 The list of current Internet-Drafts can be accessed at 29 http://www.ietf.org/ietf/1id-abstracts.txt 31 The list of Internet-Draft Shadow Directories can be accessed at 32 http://www.ietf.org/shadow.html 34 This Internet-Draft will expire in August 2007. 36 Abstract 38 Parallel NFS (pNFS) extends NFSv4 to allow clients to directly access 39 file data on the storage used by the NFSv4 server. This ability to 40 bypass the server for data access can increase both performance and 41 parallelism, but requires additional client functionality for data 42 access, some of which is dependent on the class of storage used. The 43 main pNFS operations draft specifies storage-class-independent 44 extensions to NFS; this draft specifies the additional extensions 45 (primarily data structures) for use of pNFS with block and volume 46 based storage. 48 Conventions used in this document 50 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 51 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 52 document are to be interpreted as described in RFC-2119 [RFC2119]. 54 Table of Contents 56 1. Introduction...................................................3 57 2. Block Layout Description.......................................3 58 2.1. Background and Architecture...............................3 59 2.2. Data Structures: Extents and Extent Lists.................4 60 2.2.1. Layout Requests and Extent Lists.....................6 61 2.2.2. Layout Commits.......................................7 62 2.2.3. Layout Returns.......................................8 63 2.2.4. Client Copy-on-Write Processing......................9 64 2.2.5. Extents are Permissions.............................10 65 2.2.6. End-of-file Processing..............................11 66 2.3. Volume Identification....................................12 67 2.4. Crash Recovery Issues....................................15 68 3. Security Considerations.......................................15 69 4. Conclusions...................................................16 70 5. IANA Considerations...........................................17 71 6. Revision History..............................................17 72 7. Acknowledgments...............................................18 73 8. References....................................................18 74 8.1. Normative References.....................................18 75 8.2. Informative References...................................18 76 Author's Addresses...............................................19 77 Intellectual Property Statement..................................19 78 Disclaimer of Validity...........................................20 79 Copyright Statement..............................................20 80 Acknowledgment...................................................20 82 1. 
Introduction 84 Figure 1 shows the overall architecture of a pNFS system: 86 +-----------+ 87 |+-----------+ +-----------+ 88 ||+-----------+ | | 89 ||| | NFSv4 + pNFS | | 90 +|| Clients |<------------------------------>| Server | 91 +| | | | 92 +-----------+ | | 93 ||| +-----------+ 94 ||| | 95 ||| | 96 ||| +-----------+ | 97 ||| |+-----------+ | 98 ||+----------------||+-----------+ | 99 |+-----------------||| | | 100 +------------------+|| Storage |------------+ 101 +| Systems | 102 +-----------+ 104 Figure 1 pNFS Architecture 106 The overall approach is that pNFS-enhanced clients obtain sufficient 107 information from the server to enable them to access the underlying 108 storage (on the Storage Systems) directly. See the pNFS portion of 109 [NFSV4.1] for more details. This draft is concerned with access from 110 pNFS clients to Storage Systems over storage protocols based on 111 blocks and volumes, such as the SCSI protocol family (e.g., parallel 112 SCSI, FCP for Fibre Channel, iSCSI, SAS). This class of storage is 113 referred to as block/volume storage. While the Server to Storage 114 System protocol is not of concern for interoperability here, it will 115 typically also be a block/volume protocol when clients use block/ 116 volume protocols. 118 2. Block Layout Description 120 2.1. Background and Architecture 122 The fundamental storage abstraction supported by block/volume storage 123 is a storage volume consisting of a sequential series of fixed size 124 blocks. This can be thought of as a logical disk; it may be realized 125 by the Storage System as a physical disk, a portion of a physical 126 disk or something more complex (e.g., concatenation, striping, RAID, 127 and combinations thereof) involving multiple physical disks or 128 portions thereof. 130 A pNFS layout for this block/volume class of storage is responsible 131 for mapping from an NFS file (or portion of a file) to the blocks of 132 storage volumes that contain the file. The blocks are expressed as 133 extents with 64 bit offsets and lengths using the existing NFSv4 134 offset4 and length4 types. Clients must be able to perform I/O to 135 the block extents without affecting additional areas of storage 136 (especially important for writes), therefore extents MUST be aligned 137 to 512-byte boundaries, and SHOULD be aligned to the block size used 138 by the NFSv4 server in managing the actual filesystem (4 kilobytes 139 and 8 kilobytes are common block sizes). This block size is 140 available as an NFSv4 attribute - see Section 11.4 of [NFSV4.1]. 142 The pNFS operation for requesting a layout (LAYOUTGET) includes the 143 "pnfs_layoutiomode4 iomode" argument which indicates whether the 144 requested layout is for read-only use or read-write use. A read-only 145 layout may contain holes that are read as zero, whereas a read-write 146 layout will contain allocated, but uninitialized storage in those 147 holes (read as zero, can be written by client). This draft also 148 supports client participation in copy on write by providing both 149 read-only and uninitialized storage for the same range in a layout. 150 Reads are initially performed on the read-only storage, with writes 151 going to the uninitialized storage. After the first write that 152 initializes the uninitialized storage, all reads are performed to 153 that now-initialized writeable storage, and the corresponding read- 154 only storage is no longer used. 156 2.2. 
Data Structures: Extents and Extent Lists 158 A pNFS block layout is a list of extents within a flat array of 512- 159 byte data blocks in a storage volume. The details of the volume 160 topology can be determined by using the GETDEVICEINFO or 161 GETDEVICELIST operation (see discussion of volume identification, 162 section 2.3 below). The block layout describes the individual block 163 extents on the volume that make up the file. 165 enum pnfs_block_extent_state4 { 167 READ_WRITE_DATA = 0, /* the data located by this extent is valid 168 for reading and writing. */ 170 READ_DATA = 1, /* the data located by this extent is valid 171 for reading only; it may not be written. 172 */ 174 INVALID_DATA = 2, /* the location is valid; the data is 175 invalid. It is a newly (pre-) allocated 176 extent. There is physical space on the 177 volume. */ 179 NONE_DATA = 3, /* the location is invalid. It is a hole in 180 the file. There is no physical space on 181 the volume. */ 183 }; 185 struct pnfs_block_extent4 { 187 offset4 offset; /* the starting offset in the 188 file */ 190 length4 length; /* the size of the extent */ 192 offset4 storage_offset; /* the starting offset in the 193 volume */ 195 pnfs_block_extent_state4 es; /* the state of this extent */ 197 }; 199 struct pnfs_block_layout4 { 201 deviceid4 volume; /* logical volume on which file 202 is stored. */ 204 pnfs_block_extent4 extents<>; /* extents which make up this 205 layout. */ 207 }; 208 The block layout consists of an identifier of the logical volume on 209 which the file is stored, followed by a list of extents which map the 210 logical regions of the file to physical locations on the volume. The 211 "storage_offset" field within each extent identifies a location on 212 the logical volume described by the "volume" field in the layout. 213 The client is responsible for translating this logical offset into an 214 offset on the appropriate underlying SAN logical unit. 216 Each extent maps a logical region of the file onto a portion of the 217 specified logical volume. The file_offset, extent_length, and es 218 fields for an extent returned from the server are always valid. The 219 interpretation of the storage_offset field depends on the value of es 220 as follows (in increasing order): 222 o READ_WRITE_DATA means that storage_offset is valid, and points to 223 valid/initialized data that can be read and written. 225 o READ_DATA means that storage_offset is valid and points to valid/ 226 initialized data which can only be read. Write operations are 227 prohibited; the client may need to request a read-write layout. 229 o INVALID_DATA means that storage_offset is valid, but points to 230 invalid uninitialized data. This data must not be physically read 231 from the disk until it has been initialized. A read request for 232 an INVALID_DATA extent must fill the user buffer with zeros. Write 233 requests must write whole server-sized blocks to the disk; bytes 234 not initialized by the user must be set to zero. Any write to 235 storage in an INVALID_DATA extent changes the written portion of 236 the extent to READ_WRITE_DATA; the pNFS client is responsible for 237 reporting this change via LAYOUTCOMMIT. 239 o NONE_DATA means that storage_offset is not valid, and this extent 240 may not be used to satisfy write requests. Read requests may be 241 satisfied by zero-filling as for INVALID_DATA. NONE_DATA extents 242 may be returned by requests for readable extents; they are never 243 returned if the request was for a writeable extent. 
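The interpretation rules above determine, for each extent state, whether a pNFS client reads from the storage device or synthesizes zeroes. The following non-normative C sketch shows one way a client might apply them when servicing a read that falls entirely within a single extent; the type definitions mirror the XDR above, and volume_read() is an assumed helper for raw access to the logical volume, not something defined by this draft.

   #include <stdint.h>
   #include <string.h>

   enum pnfs_block_extent_state4 {
       READ_WRITE_DATA = 0, READ_DATA = 1, INVALID_DATA = 2, NONE_DATA = 3
   };

   struct pnfs_block_extent4 {
       uint64_t offset;                   /* starting offset in the file */
       uint64_t length;                   /* size of the extent */
       uint64_t storage_offset;           /* starting offset in the volume */
       enum pnfs_block_extent_state4 es;  /* state of this extent */
   };

   /* Assumed helper: read len bytes at byte offset vol_off on the logical
    * volume that backs this layout.  Returns 0 on success. */
   extern int volume_read(uint64_t vol_off, void *buf, size_t len);

   /* Service a read of len bytes at file offset foff, assuming the range
    * lies entirely within extent e. */
   static int extent_read(const struct pnfs_block_extent4 *e,
                          uint64_t foff, void *buf, size_t len)
   {
       switch (e->es) {
       case READ_WRITE_DATA:
       case READ_DATA:
           /* Initialized data: translate the file offset to a volume offset. */
           return volume_read(e->storage_offset + (foff - e->offset), buf, len);
       case INVALID_DATA:
       case NONE_DATA:
           /* Uninitialized storage or a hole: never touch the device; the
            * read is satisfied by zero-filling the user buffer. */
           memset(buf, 0, len);
           return 0;
       }
       return -1;
   }

A complete client would also have to locate the extent covering foff and split reads that span extent boundaries.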
245 An extent list lists all relevant extents in increasing order of the
246 file_offset of each extent; any ties are broken by increasing order
247 of the extent state (es).

249 2.2.1. Layout Requests and Extent Lists

251 Each request for a layout specifies at least three parameters:
252 offset, desired size, and minimum size. If the status of a request
253 indicates success, the extent list returned must meet the following
254 criteria:

256 o A request for a readable (but not writeable) layout returns only
257 READ_DATA or NONE_DATA extents (but not INVALID_DATA or
258 READ_WRITE_DATA extents).

260 o A request for a writeable layout returns READ_WRITE_DATA or
261 INVALID_DATA extents (but not NONE_DATA extents). It may also
262 return READ_DATA extents only when the offset ranges in those
263 extents are also covered by INVALID_DATA extents to permit writes.

265 o The first extent in the list MUST contain the starting offset.

267 o The total size of extents in the extent list MUST cover at least
268 the minimum size and no more than the desired size. One exception
269 is allowed: the total size MAY be smaller if only readable extents
270 were requested and EOF is encountered.

272 o Extents in the extent list MUST be logically contiguous for a
273 read-only layout. For a read-write layout, the set of writable
274 extents (i.e., excluding READ_DATA extents) MUST be logically
275 contiguous. Every READ_DATA extent in a read-write layout MUST be
276 covered by an INVALID_DATA extent. This overlap of READ_DATA and
277 INVALID_DATA extents is the only permitted extent overlap.

279 o Extents MUST be ordered in the list by starting offset, with
280 READ_DATA extents preceding INVALID_DATA extents in the case of
281 equal file_offsets.

283 2.2.2. Layout Commits

285 struct pnfs_block_layoutupdate4 {

287 pnfs_block_extent4 commit_list<>; /* list of extents which now
288 contain valid data. */

290 bool make_version; /* client requests server to
291 create copy-on-write image of
292 this file. */

294 };

296 The "pnfs_block_layoutupdate4" structure is used by the client as the
297 block-protocol-specific argument in a LAYOUTCOMMIT operation. The
298 "commit_list" field is an extent list covering regions of the file
299 layout that were previously in the INVALID_DATA state, but have been
300 written by the client and should now be considered in the
301 READ_WRITE_DATA state. The es field of each extent in the
302 commit_list MUST be set to READ_WRITE_DATA. Implementers should be
303 aware that a server may be unable to commit regions at a granularity
304 smaller than a file-system block (typically 4KB or 8KB). As noted
305 above, the block size that the server uses is available as an NFSv4
306 attribute, and any extents included in the "commit_list" MUST be
307 aligned to this granularity and have a size that is a multiple of
308 this granularity. If the client believes that its actions have moved
309 the end-of-file into the middle of a block being committed, the
310 client MUST write zeroes from the end-of-file to the end of that
311 block before committing the block. Failure to do so may result in
312 junk (uninitialized data) appearing in that area if the file is
313 subsequently extended by moving the end-of-file.
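The alignment and end-of-file rules above can be illustrated with a non-normative C sketch of how a client might turn a dirty byte range into a commit_list extent: the range is widened to the server's block size, and if the end-of-file falls inside a block being committed, the tail of that block is zeroed first. The block size would come from the NFSv4 block-size attribute; zero_volume_range() and the storage_base parameter are assumptions of this sketch, not protocol elements.

   #include <stdint.h>

   /* Assumed helper: write len zero bytes at byte offset vol_off on the
    * volume (used to zero the tail of the end-of-file block). */
   extern int zero_volume_range(uint64_t vol_off, uint64_t len);

   /* Turn the dirty file range [dirty_off, dirty_off + dirty_len) into an
    * aligned extent for the commit_list.  blksz is the server's file-system
    * block size; storage_base is the volume offset corresponding to file
    * offset zero for the range being committed (a simplification of the
    * real extent mapping). */
   static void build_commit_extent(uint64_t dirty_off, uint64_t dirty_len,
                                   uint64_t file_size, uint64_t blksz,
                                   uint64_t storage_base,
                                   uint64_t *commit_off, uint64_t *commit_len)
   {
       uint64_t start = (dirty_off / blksz) * blksz;             /* round down */
       uint64_t end   = ((dirty_off + dirty_len + blksz - 1) / blksz) * blksz;
       uint64_t eof_block_end = ((file_size + blksz - 1) / blksz) * blksz;

       /* If the end-of-file lies in the middle of a block being committed,
        * zero from the end-of-file to the end of that block so that no
        * uninitialized data appears if the file is later extended. */
       if (file_size > start && file_size < end && eof_block_end > file_size)
           zero_volume_range(storage_base + file_size,
                             eof_block_end - file_size);

       *commit_off = start;           /* aligned to the server block size */
       *commit_len = end - start;     /* a multiple of the server block size */
   }

The resulting offset and length would populate a pnfs_block_extent4 in the commit_list with es set to READ_WRITE_DATA, as required above.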
315 The "make_version" field of the structure is a flag that the client
316 may set to request that the server create a copy-on-write image of
317 the file (pNFS clients may be involved in this operation - see
318 section 2.2.4, below). In anticipation of this operation, the client
319 which sets the "make_version" flag in the LAYOUTCOMMIT operation
320 should immediately mark all extents in the layout that it possesses
321 as state READ_DATA. Future writes to the file require a new
322 LAYOUTGET operation to the server with an "iomode" set to
323 LAYOUTIOMODE_RW.

325 2.2.3. Layout Returns

327 struct pnfs_block_layoutreturn4 {

329 pnfs_block_extent4 rel_list<>; /* list of extents the client
330 will no longer use. */

332 };

334 The "rel_list" field is an extent list covering regions of the file
335 layout that are no longer needed by the client. Including extents in
336 the "rel_list" for a LAYOUTRETURN operation represents an explicit
337 release of resources by the client, usually done for the purpose of
338 avoiding unnecessary CB_LAYOUTRECALL operations in the future.

340 Note that the block/volume layout supports unilateral layout
341 revocation. When a layout is unilaterally revoked by the server,
342 usually due to the client's lease timer expiring or the client
343 failing to return a layout in a timely manner, it is important for
344 the sake of correctness that any in-flight I/Os that the client
345 issued before the layout was revoked are rejected at the storage.
346 For the block/volume protocol, this is possible by fencing a client
347 with an expired layout timer from the physical storage. Note,
348 however, that the granularity of this operation can only be at the
349 host/logical-unit level. Thus, if one of a client's layouts is
350 unilaterally revoked by the server, it will effectively render
351 useless *all* of the client's layouts for files located on the
352 storage units comprising the logical volume. This may also render
353 useless the client's layouts for files in other filesystems.

355 2.2.4. Client Copy-on-Write Processing

357 Distinguishing the READ_WRITE_DATA and READ_DATA extent types in
358 combination with the allowed overlap of READ_DATA extents with
359 INVALID_DATA extents allows copy-on-write processing to be done by
360 pNFS clients. In classic NFS, this operation would be done by the
361 server. Since pNFS enables clients to do direct block access, it is
362 useful for clients to participate in copy-on-write operations. All
363 block/volume pNFS clients MUST support this copy-on-write processing.

365 When a client wishes to write data covered by a READ_DATA extent, it
366 MUST have requested a writable layout from the server; that layout
367 will contain INVALID_DATA extents to cover all the data ranges of
368 that layout's READ_DATA extents. More precisely, for any file_offset
369 range covered by one or more READ_DATA extents in a writable layout,
370 the server MUST include one or more INVALID_DATA extents in the
371 layout that cover the same file_offset range. When performing a write
372 to such an area of a layout, the client MUST effectively copy the
373 data from the READ_DATA extent for any partial blocks of file_offset
374 and range, merge in the changes to be written, and write the result
375 to the INVALID_DATA extent for the blocks for that file_offset and
376 range. That is, if entire blocks of data are to be overwritten by an
377 operation, the corresponding READ_DATA blocks need not be fetched,
378 but any partial-block writes must be merged with data fetched via
379 READ_DATA extents before storing the result via INVALID_DATA extents.
380 For the purposes of this discussion, "entire blocks" and "partial
381 blocks" refer to the server's file-system block size.
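As a non-normative illustration of the read-merge-write sequence just described, the following sketch handles a write that partially overwrites a single file-system block. The helpers read_via_read_data() and write_via_invalid_data() stand for raw volume I/O addressed through the READ_DATA and INVALID_DATA extents that cover the same file_offset range; they, and the fixed block buffer, are assumptions of the sketch rather than elements of the protocol.

   #include <stdint.h>
   #include <string.h>

   /* Assumed helpers: raw volume I/O addressed through the READ_DATA and
    * INVALID_DATA extents covering the given file offset. */
   extern int read_via_read_data(uint64_t file_off, void *buf, size_t len);
   extern int write_via_invalid_data(uint64_t file_off, const void *buf,
                                     size_t len);

   /* Write len bytes at file_off, where the write is contained in one
    * server block of size blksz and does not overwrite the whole block. */
   static int cow_partial_block_write(uint64_t file_off, const void *data,
                                      size_t len, uint64_t blksz)
   {
       uint64_t block_start = (file_off / blksz) * blksz;
       unsigned char block[8192];           /* assumes blksz <= 8192 */
       int err;

       /* 1. Fetch the old contents of the block via the READ_DATA extent. */
       err = read_via_read_data(block_start, block, blksz);
       if (err)
           return err;

       /* 2. Merge the new bytes into the block image. */
       memcpy(block + (file_off - block_start), data, len);

       /* 3. Write the whole block via the INVALID_DATA extent; the written
        *    range becomes READ_WRITE_DATA and is later reported to the
        *    server with LAYOUTCOMMIT. */
       return write_via_invalid_data(block_start, block, blksz);
   }

Writes that cover entire blocks skip step 1 and go straight to the INVALID_DATA extent.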
Storing of 382 data in an INVALID_DATA extent converts the written portion of the 383 INVALID_DATA extent to a READ_WRITE_DATA extent; all subsequent reads 384 MUST be performed from this extent; the corresponding portion of the 385 READ_DATA extent MUST NOT be used after storing data in an 386 INVALID_DATA extent. 388 In the LAYOUTCOMMIT operation that normally sends updated layout 389 information back to the server, for writable data, some INVALID_DATA 390 extents may be committed as READ_WRITE_DATA extents, signifying that 391 the storage at the corresponding storage_offset values has been 392 stored into and is now to be considered as valid data to be read. 393 READ_DATA extents are not committed to the server. For extents that 394 the client receives via LAYOUTGET as INVALID_DATA and returns via 395 LAYOUTCOMMIT as READ_WRITE_DATA, the server will understand that the 396 READ_DATA mapping for that extent is no longer valid or necessary for 397 that file. 399 2.2.5. Extents are Permissions 401 Layout extents returned to pNFS clients grant permission to read or 402 write; READ_DATA and NONE_DATA are read-only (NONE_DATA reads as 403 zeroes), READ_WRITE_DATA and INVALID_DATA are read/write, 404 (INVALID_DATA reads as zeros, any write converts it to 405 READ_WRITE_DATA). This is the only client means of obtaining 406 permission to perform direct I/O to storage devices; a pNFS client 407 MUST NOT perform direct I/O operations that are not permitted by an 408 extent held by the client. Client adherence to this rule places the 409 pNFS server in control of potentially conflicting storage device 410 operations, enabling the server to determine what does conflict and 411 how to avoid conflicts by granting and recalling extents to/from 412 clients. 414 Block/volume class storage devices are not required to perform read 415 and write operations atomically. Overlapping concurrent read and 416 write operations to the same data may cause the read to return a 417 mixture of before-write and after-write data. Overlapping write 418 operations can be worse, as the result could be a mixture of data 419 from the two write operations; data corruption can occur if the 420 underlying storage is striped and the operations complete in 421 different orders on different stripes. A pNFS server can avoid these 422 conflicts by implementing a single writer XOR multiple readers 423 concurrency control policy when there are multiple clients who wish 424 to access the same data. This policy SHOULD be implemented when 425 storage devices do not provide atomicity for concurrent read/write 426 and write/write operations to the same data. 428 If a client makes a layout request that conflicts with an existing 429 layout delegation, the request will be rejected with the error 430 NFS4ERR_LAYOUTTRYLATER. This client is then expected to retry the 431 request after a short interval. During this interval the server 432 SHOULD recall the conflicting portion of the layout delegation from 433 the client that currently holds it. This reject-and-retry approach 434 does not prevent client starvation when there is contention for the 435 layout of a particular file. For this reason a pNFS server SHOULD 436 implement a mechanism to prevent starvation. One possibility is that 437 the server can maintain a queue of rejected layout requests. Each 438 new layout request can be checked to see if it conflicts with a 439 previous rejected request, and if so, the newer request can be 440 rejected. 
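One possible shape for the rejected-request queue mentioned above is sketched below, non-normatively: the server remembers rejected layout requests and rejects any new request from a different client that overlaps a remembered range, so the original requester is not starved. The structure and field names are purely illustrative.

   #include <stdbool.h>
   #include <stddef.h>
   #include <stdint.h>

   struct rejected_req {            /* one remembered rejection */
       uint64_t client_id;
       uint64_t file_offset;
       uint64_t length;
       uint64_t reject_time;        /* used to age the entry out */
   };

   /* Return true if a new layout request overlaps a remembered rejection
    * from a different client; such a request would itself be answered with
    * NFS4ERR_LAYOUTTRYLATER. */
   static bool conflicts_with_rejected(const struct rejected_req *queue,
                                       size_t nentries, uint64_t client_id,
                                       uint64_t offset, uint64_t length)
   {
       for (size_t i = 0; i < nentries; i++) {
           if (queue[i].client_id == client_id)
               continue;            /* the original requester may retry */
           if (offset < queue[i].file_offset + queue[i].length &&
               queue[i].file_offset < offset + length)
               return true;         /* byte ranges overlap */
       }
       return false;
   }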
Once the original requesting client retries its request,
441 its entry in the rejected request queue can be cleared, or the entry
442 in the rejected request queue can be removed when it reaches a
443 certain age.

445 NFSv4 supports mandatory locks and share reservations. These are
446 mechanisms that clients can use to restrict the set of I/O operations
447 that are permissible to other clients. Since all I/O operations
448 ultimately arrive at the NFSv4 server for processing, the server is
449 in a position to enforce these restrictions. However, with pNFS
450 layout delegations, I/Os will be issued from the clients that hold
451 the delegations directly to the storage devices that host the data.
452 These devices have no knowledge of files, mandatory locks, or share
453 reservations, and are not in a position to enforce such restrictions.
454 For this reason, the NFSv4 server MUST NOT grant layout delegations
455 that conflict with mandatory locks or share reservations. Further,
456 if a conflicting mandatory lock request or a conflicting open request
457 arrives at the server, the server MUST recall the part of the layout
458 delegation in conflict with the request before granting the request.

460 2.2.6. End-of-file Processing

462 The end-of-file location can be changed in two ways: implicitly as
463 the result of a WRITE or LAYOUTCOMMIT beyond the current end-of-file,
464 or explicitly as the result of a SETATTR request. Typically, when a
465 file is truncated by an NFSv4 client via the SETATTR call, the server
466 frees any disk blocks belonging to the file which are beyond the new
467 end-of-file byte, and may write zeros to the portion of the new end-
468 of-file block beyond the new end-of-file byte. These actions render
469 any pNFS layouts which refer to the blocks that are freed or written
470 semantically invalid. Therefore, the server MUST recall from clients
471 the portions of any pNFS layouts which refer to blocks that will be
472 freed or written by the server before processing the truncate
473 request. These recalls may take time to complete; as explained in
474 [NFSv4.1], if the server cannot respond to the client SETATTR request
475 in a reasonable amount of time, it SHOULD reply to the client with
476 the error NFS4ERR_DELAY.

478 Blocks in the INVALID_DATA state which lie beyond the new end-of-file
479 block present a special case. The server has reserved these blocks
480 for use by a pNFS client with a writable layout for the file, but the
481 client has yet to commit the blocks, and they are not yet a part of
482 the file mapping on disk. The server MAY free these blocks while
483 processing the SETATTR request. If so, the server MUST recall any
484 layouts from pNFS clients which refer to the blocks before processing
485 the truncate. If the server does not free the INVALID_DATA blocks
486 while processing the SETATTR request, it need not recall layouts
487 which refer only to the INVALID_DATA blocks.

489 When a file is extended implicitly by a WRITE or LAYOUTCOMMIT beyond
490 the current end-of-file, or extended explicitly by a SETATTR request,
491 the server need not recall any portions of any pNFS layouts.

493 2.3. Volume Identification

495 Storage Systems such as storage arrays can have multiple physical
496 network ports that need not be connected to a common network,
497 resulting in a pNFS client having simultaneous multipath access to
498 the same storage volumes via different ports on different networks.
499 The networks may not even be the same technology - for example,
500 access to the same volume via both iSCSI and Fibre Channel is
501 possible, hence network addresses are difficult to use for volume
502 identification. For this reason, this pNFS block layout identifies
503 storage volumes by content, for example providing the means to match
504 (unique portions of) labels used by volume managers. Any block pNFS
505 system using this layout MUST support a means of content-based unique
506 volume identification that can be employed via the data structure
507 given here.

509 struct pnfs_block_sig_component4 { /* disk signature component */

511 int64_t sig_offset; /* byte offset of component
512 from start of volume if positive,
513 from end of volume if negative */

515 length4 sig_length; /* byte length of component */

517 opaque contents<>; /* contents of this component of the
518 signature (this is opaque) */

520 };

522 enum pnfs_block_volume_type4 {

524 VOLUME_SIMPLE = 0, /* volume maps to a single LU */

526 VOLUME_SLICE = 1, /* volume is a slice of another volume */

528 VOLUME_CONCAT = 2, /* volume is a concatenation of multiple
529 volumes */

531 VOLUME_STRIPE = 3, /* volume is striped across multiple
532 volumes */

534 };

536 struct pnfs_block_slice_volume_info4 {

538 offset4 start; /* block-offset of the start of the
539 slice */

541 length4 length; /* length of slice in blocks */

543 deviceid4 volume; /* volume which is sliced */

545 };

547 struct pnfs_block_concat_volume_info4 {

549 deviceid4 volumes<>; /* volumes which are concatenated */

551 };

553 struct pnfs_block_stripe_volume_info4 {

555 length4 stripe_unit; /* size of stripe */

557 deviceid4 volumes<>; /* volumes which are striped
558 across */

560 };

562 union pnfs_block_deviceaddr4 switch (pnfs_block_volume_type4 type) {

564 case VOLUME_SIMPLE:

566 pnfs_block_sig_component4 ds; /* disk signature */

569 case VOLUME_SLICE:

571 pnfs_block_slice_volume_info4 slice_info;

573 case VOLUME_CONCAT:

575 pnfs_block_concat_volume_info4 concat_info;

577 case VOLUME_STRIPE:

579 pnfs_block_stripe_volume_info4 stripe_info;

581 default:

583 void;

585 };

587 The "pnfs_block_deviceaddr4" union is a recursive structure that
588 allows arbitrarily complex nested volume structures to be encoded.
589 The types of aggregations that are allowed are stripes,
590 concatenations, and slices. The base case is a volume which maps
591 simply to one logical unit in the SAN, identified by the
592 "pnfs_block_sig_component4" structure. Each SAN logical unit is
593 content-identified by a disk signature made up of extents within
594 blocks and contents that must match. The "pnfs_block_deviceaddr4"
595 union is returned by the server as the storage-protocol-specific
596 opaque field in the "deviceaddr4" structure, in response to the
597 GETDEVICEINFO or GETDEVICELIST operations. Note that the opaque
598 "contents" field in the "pnfs_block_sig_component4" structure MUST
599 NOT be interpreted as a zero-terminated string, as it may contain
600 embedded zero-valued octets. It contains exactly sig_length octets.
601 There are no restrictions on alignment (e.g., neither sig_offset nor
602 sig_length are required to be multiples of 4). The sig_offset is a
603 signed quantity which when positive represents an offset from the
604 start of the volume, and when negative represents an offset from the
605 end of the volume.

606 Negative offsets are permitted in order to simplify the client
607 implementation on systems where the device label is found at a fixed
608 offset from the end of the volume.
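The content-based matching described above can be sketched, non-normatively, as follows: for each signature component the client resolves sig_offset (a negative value is taken from the end of the volume) and compares the bytes found on a candidate logical unit with the component's contents. The helpers lu_size() and lu_read(), and the local struct mirroring pnfs_block_sig_component4, are assumptions of the sketch.

   #include <stdbool.h>
   #include <stdint.h>
   #include <stdlib.h>
   #include <string.h>

   struct sig_component {             /* mirrors pnfs_block_sig_component4 */
       int64_t  sig_offset;           /* negative: offset from end of volume */
       uint64_t sig_length;           /* exactly this many octets */
       const unsigned char *contents;
   };

   /* Assumed helpers for raw access to a candidate SAN logical unit. */
   extern uint64_t lu_size(int lu);                            /* in bytes */
   extern int lu_read(int lu, uint64_t off, void *buf, uint64_t len);

   /* Return true if logical unit lu matches every signature component. */
   static bool volume_matches(int lu, const struct sig_component *sig,
                              size_t ncomponents)
   {
       for (size_t i = 0; i < ncomponents; i++) {
           uint64_t off = sig[i].sig_offset >= 0
               ? (uint64_t)sig[i].sig_offset
               : lu_size(lu) - (uint64_t)(-sig[i].sig_offset);
           unsigned char *buf = malloc(sig[i].sig_length);
           bool match = buf != NULL &&
               lu_read(lu, off, buf, sig[i].sig_length) == 0 &&
               memcmp(buf, sig[i].contents, sig[i].sig_length) == 0;
           free(buf);
           if (!match)
               return false;
       }
       return true;
   }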
In the absence of a negative 609 offset, imagine a system where the client has access to n volumes and 610 a file system is striped across m volumes. If those m disks are all 611 different sizes, then in the worst case, the client would need to 612 read n times m blocks in order to properly identify the volumes used 613 by a layout. If the server uses negative offsets to describe the 614 signature, then the client and server MUST NOT see different volume 615 sizes. Negative offsets SHOULD NOT be used in systems that 616 dynamically resize volumes unless care is taken to ensure that the 617 device label is always present at the offset from the end of the 618 volume as seen by the clients. 620 2.4. Crash Recovery Issues 622 When the server crashes while the client holds a writable layout, and 623 the client has written data to blocks covered by the layout, and the 624 blocks are still in the INVALID_DATA state, the client has two 625 options for recovery. If the data that has been written to these 626 blocks is still cached by the client, the client can simply re-write 627 the data via NFSv4, once the server has come back online. However, 628 if the data is no longer in the client's cache, the client MUST NOT 629 attempt to source the data from the data servers. Instead, it should 630 attempt to commit the blocks in question to the server during the 631 server's recovery grace period, by sending a LAYOUTCOMMIT with the 632 "reclaim" flag set to true. This process is described in detail in 633 [NFSv4.1] section 21.42.4. 635 3. Security Considerations 637 Typically, SAN disk arrays and SAN protocols provide access control 638 mechanisms (access-logics, lun masking, etc.) which operate at the 639 granularity of individual hosts. The functionality provided by such 640 mechanisms makes it possible for the server to "fence" individual 641 client machines from certain physical disks---that is to say, to 642 prevent individual client machines from reading or writing to certain 643 physical disks. Finer-grained access control methods are not 644 generally available. For this reason, certain security 645 responsibilities are delegated to pNFS clients for block/volume 646 layouts. Block/volume storage systems generally control access at a 647 volume granularity, and hence pNFS clients have to be trusted to only 648 perform accesses allowed by the layout extents they currently hold 649 (e.g., and not access storage for files on which a layout extent is 650 not held). In general, the server will not be able to prevent a 651 client which holds a layout for a file from accessing parts of the 652 physical disk not covered by the layout. Similarly, the server will 653 not be able to prevent a client from accessing blocks covered by a 654 layout that it has already returned. This block-based level of 655 protection must be provided by the client software. 657 An alternative method of block/volume protocol use is for the storage 658 devices to export virtualized block addresses, which do reflect the 659 files to which blocks belong. These virtual block addresses are 660 exported to pNFS clients via layouts. This allows the storage device 661 to make appropriate access checks, while mapping virtual block 662 addresses to physical block addresses. 
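The client-side protection discussed in this section amounts to checking every direct I/O against the extents the client currently holds (see section 2.2.5). A non-normative sketch of such a check follows; the held-extent array and its traversal are illustrative assumptions.

   #include <stdbool.h>
   #include <stddef.h>
   #include <stdint.h>

   enum pnfs_block_extent_state4 {
       READ_WRITE_DATA = 0, READ_DATA = 1, INVALID_DATA = 2, NONE_DATA = 3
   };

   struct held_extent {
       uint64_t offset, length;              /* file byte range covered */
       enum pnfs_block_extent_state4 es;
   };

   /* Return true only if [off, off + len) is covered by a held extent whose
    * state permits the requested direct I/O; otherwise the client must not
    * touch the storage device for this range. */
   static bool direct_io_permitted(const struct held_extent *ext, size_t n,
                                   uint64_t off, uint64_t len, bool is_write)
   {
       for (size_t i = 0; i < n; i++) {
           if (off < ext[i].offset ||
               off + len > ext[i].offset + ext[i].length)
               continue;                     /* not covered by this extent */
           if (is_write && (ext[i].es == READ_WRITE_DATA ||
                            ext[i].es == INVALID_DATA))
               return true;
           /* Direct reads only from initialized data; INVALID_DATA and
            * NONE_DATA ranges are zero-filled by the client instead. */
           if (!is_write && (ext[i].es == READ_WRITE_DATA ||
                             ext[i].es == READ_DATA))
               return true;
       }
       return false;
   }

Such a check limits a well-behaved client to the ranges granted by the server but, as noted above, it cannot substitute for access control at the storage device against a misbehaving client.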
In environments where the
663 security requirements are such that client-side protection from
664 access to storage outside of the layout is not sufficient, pNFS
665 block/volume storage layouts SHOULD NOT be used, unless the storage
666 device is able to implement the appropriate access checks, via use
667 of virtualized block addresses or other means.

669 This also has implications for some NFSv4 functionality outside pNFS.
670 For instance, if a file is covered by a mandatory read-only lock, the
671 server can ensure that only readable layouts for the file are granted
672 to pNFS clients. However, it is up to each pNFS client to ensure
673 that the readable layout is used only to service read requests, and
674 not to allow writes to the existing parts of the file. Since
675 block/volume storage systems are generally not capable of enforcing
676 such file-based security, in environments where pNFS clients cannot
677 be trusted to enforce such policies, pNFS block/volume storage
678 layouts SHOULD NOT be used.

680 Access to block/volume storage is logically at a lower layer of the
681 I/O stack than NFSv4, and hence NFSv4 security is not directly
682 applicable to protocols that access such storage directly. Depending
683 on the protocol, some of the security mechanisms provided by NFSv4
684 (e.g., encryption, cryptographic integrity) may not be available, or
685 may be provided via different means. At one extreme, pNFS with
686 block/volume storage can be used with storage access protocols (e.g.,
687 parallel SCSI) that provide essentially no security functionality.
688 At the other extreme, pNFS may be used with storage protocols such as
689 iSCSI that provide significant security functionality. It is the
690 responsibility of those administering and deploying pNFS with a
691 block/volume storage access protocol to ensure that appropriate
692 protection is provided to that protocol (physical security is a
693 common means for protocols not based on IP). In environments where
694 the security requirements for the storage protocol cannot be met,
695 pNFS block/volume storage layouts SHOULD NOT be used.

697 When security is available for a storage protocol, it is generally at
698 a different granularity and with a different notion of identity than
699 NFSv4 (e.g., NFSv4 controls user access to files, iSCSI controls
700 initiator access to volumes). The responsibility for enforcing
701 appropriate correspondences between these security layers is placed
702 upon the pNFS client. As with the issues in the first paragraph of
703 this section, in environments where the security requirements are
704 such that client-side protection from access to storage outside of
705 the layout is not sufficient, pNFS block/volume storage layouts
706 SHOULD NOT be used.

708 4. Conclusions

710 This draft specifies the block/volume layout type for pNFS and
711 associated functionality.

713 5. IANA Considerations

715 There are no IANA considerations in this document. All pNFS IANA
716 Considerations are covered in [NFSV4.1].

718 6. Revision History

720 -00: Initial Version as draft-black-pnfs-block-00

722 -01: Rework discussion of extents as locks to talk about extents
723 granting access permissions. Rewrite operation ordering section to
724 discuss deadlocks and races that can cause problems. Add new section
725 on recall completion. Add client copy-on-write based on text from
726 Craig Everhart.

728 -02: Fix glitches in extent state descriptions. Describe most issues
729 as RESOLVED.
Most of Section 3 has been incorporated into the
730 main pNFS draft; added a NOTE to that effect and stated that it will
731 be deleted in the next version of this draft (which should be a
732 draft-ietf-nfsv4 draft). Cleanup of a number of things has been left
733 to that draft revision, including the interlocks with the types in
734 the main pNFS draft, layout striping support, and finishing the
735 Security Considerations section.

737 -00: New version as draft-ietf-nfsv4-pnfs-block. Removed resolved
738 operations issues (Section 3). Align types with main pNFS draft
739 (which is now part of the NFSv4.1 minor version draft), add volume
740 striping and slicing support. New operations issues are in Section 3
741 - the need for a "reclaim bit" and EOF concerns are the two major
742 issues. Extended and improved the Security Considerations section,
743 but it still needs work. Added 1-sentence conclusion that also still
744 needs work.

746 -01: Changed definition of pnfs_block_deviceaddr4 union to allow more
747 concise representation of aggregated volume structures. Fixed typos
748 to make both pnfs_block_layoutupdate and pnfs_block_layoutreturn
749 structures contain extent lists instead of a single extent. Updated
750 section 2.1.6 to remove references to CB_SIZECHANGED. Moved
751 description of recovery from "Issues" section to "Block Layout
752 Description" section. Removed section 3.2 "End-of-file handling
753 issues". Merged old "block/volume layout security considerations"
754 section from previous version of [NFSv4.1] with section 4. Moved
755 paragraph on lingering writes to the section which describes layout
756 return. Removed Issues section (3) as the remaining issues are all
757 resolved.

759 -02: Changed pnfs_deviceaddr4 to deviceaddr4 to match [NFSv4.1].
760 Updated section 2.2.2 to clarify that the es fields must be
761 READ_WRITE_DATA in pnfs_block_layoutupdate requests. Updated section
762 2.2.5 to specify that data corruption can occur; that requests, not
763 the client, are rejected; that server "SHOULD" recall conflicting
764 portions of layouts. Clarified that unilateral revocation may affect
765 layouts from other filesystems. Changed signature offset to be a
766 signed quantity to allow for labels at a fixed location from the end
767 of a volume. Changed all data structures to have suffix "4", changed
768 extentState4 to pnfs_block_extent_state4 and sigComponent to
769 pnfs_block_sig_component4, to conform to [NFSv4.1].

771 7. Acknowledgments

773 This draft draws extensively on the authors' familiarity with the
774 mapping functionality and protocol in EMC's HighRoad system
775 [HighRoad]. The protocol used by HighRoad is called FMP (File
776 Mapping Protocol); it is an add-on protocol that runs in parallel
777 with filesystem protocols such as NFSv3 to provide pNFS-like
778 functionality for block/volume storage. While drawing on HighRoad
779 FMP, the data structures and functional considerations in this draft
780 differ in significant ways, based on lessons learned and the
781 opportunity to take advantage of NFSv4 features such as COMPOUND
782 operations. The design to support pNFS client participation in copy-
783 on-write is based on text and ideas contributed by Craig Everhart
784 (formerly with IBM).

786 8. References

788 8.1. Normative References

790 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
791 Requirement Levels", BCP 14, RFC 2119, March 1997.

793 [NFSV4.1] Shepler, S., Eisler, M., and Noveck, D.
ed., "NFSv4 Minor 794 Version 1", draft-ietf-nfsv4-minorversion1-08.txt, Internet 795 Draft, October 2006. 797 8.2. Informative References 799 [HighRoad] EMC Corporation, "EMC Celerra HighRoad", EMC C819.1 white 800 paper, available at: 801 http://www.emc.com/pdf/products/celerra_file_server/HighRoad_wp.pdf 802 link checked 29 August 2006. 804 Author's Addresses 806 David L. Black 807 EMC Corporation 808 176 South Street 809 Hopkinton, MA 01748 811 Phone: +1 (508) 293-7953 812 Email: black_david@emc.com 814 Stephen Fridella 815 EMC Corporation 816 228 South Street 817 Hopkinton, MA 01748 819 Phone: +1 (508) 249-3528 820 Email: fridella_stephen@emc.com 822 Jason Glasgow 823 EMC Corporation 824 32 Coslin Drive 825 Southboro, MA 01772 827 Phone: +1 (508) 305 8831 828 Email: glasgow_jason@emc.com 830 Intellectual Property Statement 832 The IETF takes no position regarding the validity or scope of any 833 Intellectual Property Rights or other rights that might be claimed to 834 pertain to the implementation or use of the technology described in 835 this document or the extent to which any license under such rights 836 might or might not be available; nor does it represent that it has 837 made any independent effort to identify any such rights. Information 838 on the procedures with respect to rights in RFC documents can be 839 found in BCP 78 and BCP 79. 841 Copies of IPR disclosures made to the IETF Secretariat and any 842 assurances of licenses to be made available, or the result of an 843 attempt made to obtain a general license or permission for the use of 844 such proprietary rights by implementers or users of this 845 specification can be obtained from the IETF on-line IPR repository at 846 http://www.ietf.org/ipr. 848 The IETF invites any interested party to bring to its attention any 849 copyrights, patents or patent applications, or other proprietary 850 rights that may cover technology that may be required to implement 851 this standard. Please address the information to the IETF at ietf- 852 ipr@ietf.org. 854 Disclaimer of Validity 856 This document and the information contained herein are provided on an 857 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 858 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 859 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 860 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 861 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 862 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 864 Copyright Statement 866 Copyright (C) The IETF Trust (2007). 868 This document is subject to the rights, licenses and restrictions 869 contained in BCP 78, and except as set forth therein, the authors 870 retain all their rights. 872 Acknowledgment 874 Funding for the RFC Editor function is currently provided by the 875 Internet Society.