NFSv4 Working Group                                             D. Black
Internet Draft                                               S. Fridella
Expires: June 12, 2009                                        J. Glasgow
Intended Status: Proposed Standard                       EMC Corporation
                                                        December 9, 2008

                        pNFS Block/Volume Layout
                   draft-ietf-nfsv4-pnfs-block-11.txt

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.
24 Internet-Drafts are draft documents valid for a maximum of six months 25 and may be updated, replaced, or obsoleted by other documents at any 26 time. It is inappropriate to use Internet-Drafts as reference 27 material or to cite them other than as "work in progress." 29 The list of current Internet-Drafts can be accessed at 30 http://www.ietf.org/ietf/1id-abstracts.txt 32 The list of Internet-Draft Shadow Directories can be accessed at 33 http://www.ietf.org/shadow.html 35 This Internet-Draft will expire in May 2009. 37 Abstract 39 Parallel NFS (pNFS) extends NFSv4 to allow clients to directly access 40 file data on the storage used by the NFSv4 server. This ability to 41 bypass the server for data access can increase both performance and 42 parallelism, but requires additional client functionality for data 43 access, some of which is dependent on the class of storage used. The 44 main pNFS operations draft specifies storage-class-independent 45 extensions to NFS; this draft specifies the additional extensions 46 (primarily data structures) for use of pNFS with block and volume 47 based storage. 49 Conventions used in this document 51 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 52 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 53 document are to be interpreted as described in RFC-2119 [RFC2119]. 55 Table of Contents 57 1. Introduction.................................................. 3 58 1.1. General Definitions ..................................... 3 59 1.2. XDR Description of NFSv4.1 block layout.................. 4 60 2. Block Layout Description ..................................... 5 61 2.1. Background and Architecture ............................. 5 62 2.2. GETDEVICELIST and GETDEVICEINFO.......................... 7 63 2.2.1. Volume Identification............................... 7 64 2.2.2. Volume Topology..................................... 8 65 2.2.3. GETDEVICELIST and GETDEVICEINFO deviceid4...........11 66 2.3. Data Structures: Extents and Extent Lists................11 67 2.3.1. Layout Requests and Extent Lists....................14 68 2.3.2. Layout Commits .....................................15 69 2.3.3. Layout Returns .....................................16 70 2.3.4. Client Copy-on-Write Processing.....................16 71 2.3.5. Extents are Permissions.............................18 72 2.3.6. End-of-file Processing .............................19 73 2.3.7. Layout Hints........................................20 74 2.3.8. Client Fencing .....................................20 75 2.4. Crash Recovery Issues....................................22 76 2.5. Recalling resources: CB_RECALL_ANY ......................23 77 2.6. Transient and Permanent Errors...........................23 78 3. Security Considerations.......................................24 79 4. Conclusions...................................................26 80 5. IANA Considerations...........................................26 81 6. Acknowledgments...............................................26 82 7. References....................................................26 83 7.1. Normative References.....................................26 84 7.2. 
Informative References...................................27 85 Author's Addresses...............................................27 86 Intellectual Property Statement..................................27 87 Disclaimer of Validity...........................................28 88 Copyright Statement .............................................28 89 Acknowledgment...................................................28 91 1. Introduction 93 Figure 1 shows the overall architecture of a Parallel NFS (pNFS) 94 system: 96 +-----------+ 97 |+-----------+ +-----------+ 98 ||+-----------+ | | 99 ||| | NFSv4.1 + pNFS | | 100 +|| Clients |<------------------------------>| Server | 101 +| | | | 102 +-----------+ | | 103 ||| +-----------+ 104 ||| | 105 ||| | 106 ||| Storage +-----------+ | 107 ||| Protocol |+-----------+ | 108 ||+----------------||+-----------+ Control | 109 |+-----------------||| | Protocol| 110 +------------------+|| Storage |------------+ 111 +| Systems | 112 +-----------+ 114 Figure 1 pNFS Architecture 116 The overall approach is that pNFS-enhanced clients obtain sufficient 117 information from the server to enable them to access the underlying 118 storage (on the Storage Systems) directly. See the pNFS portion of 119 [NFSV4.1] for more details. This draft is concerned with access from 120 pNFS clients to Storage Systems over storage protocols based on 121 blocks and volumes, such as the SCSI protocol family (e.g., parallel 122 SCSI, FCP for Fibre Channel, iSCSI, SAS, and FCoE). This class of 123 storage is referred to as block/volume storage. While the Server to 124 Storage System protocol, called the "Control Protocol", is not of 125 concern for interoperability here, it will typically also be a 126 block/volume protocol when clients use block/ volume protocols. 128 1.1. General Definitions 130 The following definitions are provided for the purpose of providing 131 an appropriate context for the reader. 133 Byte 134 This document defines a byte as an octet, i.e. a datum exactly 8 135 bits in length. 137 Client 139 The "client" is the entity that accesses the NFS server's 140 resources. The client may be an application which contains the 141 logic to access the NFS server directly. The client may also be 142 the traditional operating system client that provides remote file 143 system services for a set of applications. 145 Server 147 The "Server" is the entity responsible for coordinating client 148 access to a set of file systems and is identified by a Server 149 owner. 151 1.2. Code Components Licensing Notice 153 The external data representation (XDR) description and scripts for 154 extracting the XDR description are Code Components as described in 155 Section 4 of "Legal Provisions Relating to IETF Documents" [LEGAL]. 156 These Code Components are licensed according to the terms of Section 157 4 of "Legal Provisions Relating to IETF Documents". 159 1.3. XDR Description 161 This document contains the XDR ([XDR]) description of the NFSv4.1 162 block layout protocol. The XDR description is embedded in this 163 document in a way that makes it simple for the reader to extract into 164 a ready to compile form. The reader can feed this document into the 165 following shell script to produce the machine readable XDR 166 description of the NFSv4.1 block layout: 168 #!/bin/sh 169 grep '^ *///' | sed 's?^ */// ??' | sed 's?^ *///$??' $* 171 I.e. 
if the above script is stored in a file called "extract.sh", and 172 this document is in a file called "spec.txt", then the reader can do: 174 sh extract.sh < spec.txt > nfs4_block_layout_spec.x 176 The effect of the script is to remove both leading white space and a 177 sentinel sequence of "///" from each matching line. 179 The embedded XDR file header follows, with subsequent pieces embedded 180 throughout the document: 182 /// /* 183 /// * This code was derived from IETF RFC &rfc.number. 184 [[RFC Editor: please insert RFC number if needed]] 185 /// * Please reproduce this note if possible. 186 /// */ 187 /// 188 /// /* 189 /// * nfs4_block_layout_prot.x 190 /// */ 191 /// 192 /// %#include "nfsv41.h" 193 /// 195 The XDR code contained in this document depends on types from 196 nfsv41.x file. This includes both nfs types that end with a 4, such 197 as offset4, length4, etc, as well as more generic types such as 198 uint32_t and uint64_t. 200 2. Block Layout Description 202 2.1. Background and Architecture 204 The fundamental storage abstraction supported by block/volume storage 205 is a storage volume consisting of a sequential series of fixed size 206 blocks. This can be thought of as a logical disk; it may be realized 207 by the Storage System as a physical disk, a portion of a physical 208 disk or something more complex (e.g., concatenation, striping, RAID, 209 and combinations thereof) involving multiple physical disks or 210 portions thereof. 212 A pNFS layout for this block/volume class of storage is responsible 213 for mapping from an NFS file (or portion of a file) to the blocks of 214 storage volumes that contain the file. The blocks are expressed as 215 extents with 64 bit offsets and lengths using the existing NFSv4 216 offset4 and length4 types. Clients must be able to perform I/O to 217 the block extents without affecting additional areas of storage 218 (especially important for writes), therefore extents MUST be aligned 219 to 512-byte boundaries, and writable extents MUST be aligned to the 220 block size used by the NFSv4 server in managing the actual file 221 system (4 kilobytes and 8 kilobytes are common block sizes). This 222 block size is available as the NFSv4.1 layout_blksize attribute. 224 [NFSV4.1]. Readable extents SHOULD be aligned to the block size used 225 by the NFSv4 server, but in order to support legacy file systems with 226 fragments, alignment to 512 byte boundaries is acceptable. 228 The pNFS operation for requesting a layout (LAYOUTGET) includes the 229 "layoutiomode4 loga_iomode" argument which indicates whether the 230 requested layout is for read-only use or read-write use. A read-only 231 layout may contain holes that are read as zero, whereas a read-write 232 layout will contain allocated, but un-initialized storage in those 233 holes (read as zero, can be written by client). This draft also 234 supports client participation in copy on write (e.g. for file systems 235 with snapshots) by providing both read-only and un-initialized 236 storage for the same range in a layout. Reads are initially 237 performed on the read-only storage, with writes going to the un- 238 initialized storage. After the first write that initializes the un- 239 initialized storage, all reads are performed to that now-initialized 240 writeable storage, and the corresponding read-only storage is no 241 longer used. 
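As an illustration of the alignment rules above, the following Python
sketch checks a single extent against one plausible reading of those
rules.  It is not part of the protocol; the function and parameter
names are chosen for the example only.

   SECTOR = 512

   def extent_aligned(file_offset, length, storage_offset,
                      blocksize, is_write):
       # Writable extents MUST be aligned to the server's block size
       # (the layout_blksize attribute); readable extents SHOULD be
       # block aligned, but 512-byte alignment is acceptable for
       # legacy file systems with fragments.
       unit = blocksize if is_write else SECTOR
       return (file_offset % unit == 0 and
               length % unit == 0 and
               storage_offset % unit == 0)

   # Example with a 4 kilobyte server block size:
   assert extent_aligned(0, 8192, 4096, 4096, is_write=True)
   assert not extent_aligned(512, 1024, 512, 4096, is_write=True)
   assert extent_aligned(512, 1024, 512, 4096, is_write=False)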
The block/volume layout solution expands the security responsibilities
of the pNFS clients, and there are a number of environments where the
mandatory-to-implement security properties for NFS cannot be satisfied.
The additional security responsibilities of the client follow, and a
full discussion appears in Section 3, "Security Considerations".

o  Typically, storage area network (SAN) disk arrays and SAN protocols
   provide access control mechanisms (e.g., logical unit number mapping
   and/or masking) which operate at the granularity of individual
   hosts, not individual blocks.  For this reason, block-based
   protection must be provided by the client software.

o  Similarly, SAN disk arrays and SAN protocols are typically not able
   to validate NFS locks that apply to file regions.  For instance, if
   a file is covered by a mandatory read-only lock, the server can
   ensure that only readable layouts for the file are granted to pNFS
   clients.  However, it is up to each pNFS client to ensure that the
   readable layout is used only to service read requests, and not to
   allow writes to the existing parts of the file.

Since block/volume storage systems are generally not capable of
enforcing such file-based security, in environments where pNFS clients
cannot be trusted to enforce such policies, pNFS block/volume storage
layouts SHOULD NOT be used.

2.2. GETDEVICELIST and GETDEVICEINFO

2.2.1. Volume Identification

Storage Systems such as storage arrays can have multiple physical
network ports that need not be connected to a common network, resulting
in a pNFS client having simultaneous multipath access to the same
storage volumes via different ports on different networks.  The
networks may not even be the same technology - for example, access to
the same volume via both iSCSI and Fibre Channel is possible, hence
network addresses are difficult to use for volume identification.  For
this reason, this pNFS block layout identifies storage volumes by
content, for example providing the means to match (unique portions of)
labels used by volume managers.  Volume identification is performed by
matching one or more opaque byte sequences to specific parts of the
stored data.  Any block pNFS system using this layout MUST support a
means of content-based unique volume identification that can be
employed via the data structure given here.

/// struct pnfs_block_sig_component4 {  /* disk signature component */
///     int64_t bsc_sig_offset;         /* byte offset of component
///                                        on volume */
///     opaque  bsc_contents<>;         /* contents of this component
///                                        of the signature */
/// };
///

Note that the opaque "bsc_contents" field in the
"pnfs_block_sig_component4" structure MUST NOT be interpreted as a
zero-terminated string, as it may contain embedded zero-valued bytes.
There are no restrictions on alignment (e.g., neither bsc_sig_offset
nor the length are required to be multiples of 4).  The bsc_sig_offset
is a signed quantity which when positive represents a byte offset from
the start of the volume, and when negative represents a byte offset
from the end of the volume.

Negative offsets are permitted in order to simplify the client
implementation on systems where the device label is found at a fixed
offset from the end of the volume.
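To illustrate the content-based matching just described, the following
Python sketch applies a signature to an in-memory volume image.  The
sketch is illustrative only; the toy volume and label are invented for
the example and are not part of the protocol.

   def resolve_offset(bsc_sig_offset, volume_size):
       # A non-negative bsc_sig_offset is a byte offset from the start
       # of the volume; a negative one is relative to the end.
       if bsc_sig_offset >= 0:
           return bsc_sig_offset
       return volume_size + bsc_sig_offset

   def signature_matches(volume_bytes, components):
       # components: list of (bsc_sig_offset, bsc_contents) pairs.
       # bsc_contents is opaque and may contain embedded zero-valued
       # bytes; it is compared verbatim, never as a C string.
       for sig_offset, contents in components:
           off = resolve_offset(sig_offset, len(volume_bytes))
           if volume_bytes[off:off + len(contents)] != contents:
               return False
       return True

   # Toy 1 MiB "volume" with a label in its last 512-byte sector:
   volume = bytearray(1024 * 1024)
   volume[-512:-504] = b"PNFSVOL1"
   assert signature_matches(bytes(volume), [(-512, b"PNFSVOL1")])

A client would apply such a comparison to each locally visible LUN in
order to locate the PNFS_BLOCK_VOLUME_SIMPLE volume that a signature
identifies.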
If the server uses negative 310 offsets to describe the signature, then the client and server MUST 311 NOT see different volume sizes. Negative offsets SHOULD NOT be used 312 in systems that dynamically resize volumes unless care is taken to 313 ensure that the device label is always present at the offset from the 314 end of the volume as seen by the clients. 316 A signature is an array of up to "PNFS_BLOCK_MAX_SIG_COMP" (defined 317 below) signature components. The client MUST NOT assume that all 318 signature components are colocated within a single sector on a block 319 device. 321 The pNFS client block layout driver uses this volume identification 322 to map pnfs_block_volume_type4 PNFS_BLOCK_VOLUME_SIMPLE deviceid4s to 323 its local view of a logical unit number (LUN). 325 2.2.2. Volume Topology 327 The pNFS block server volume topology is expressed as an arbitrary 328 combination of base volume types enumerated in the following data 329 structures. The individual components of the topology are contained 330 in an array and components may refer to other components by using 331 array indices. 333 /// enum pnfs_block_volume_type4 { 334 /// PNFS_BLOCK_VOLUME_SIMPLE = 0, /* volume maps to a single 335 /// LU */ 336 /// PNFS_BLOCK_VOLUME_SLICE = 1, /* volume is a slice of 337 /// another volume */ 338 /// PNFS_BLOCK_VOLUME_CONCAT = 2, /* volume is a 339 /// concatenation of 340 /// multiple volumes */ 341 /// PNFS_BLOCK_VOLUME_STRIPE = 3 /* volume is striped across 342 /// multiple volumes */ 343 /// }; 344 /// 345 /// const PNFS_BLOCK_MAX_SIG_COMP = 16;/* maximum components per 346 /// signature */ 347 /// struct pnfs_block_simple_volume_info4 { 348 /// pnfs_block_sig_component4 bsv_ds; 349 /// /* disk signature */ 350 /// }; 351 /// 352 /// 353 /// struct pnfs_block_slice_volume_info4 { 354 /// offset4 bsv_start; /* offset of the start of the 355 /// slice in bytes */ 356 /// length4 bsv_length; /* length of slice in bytes */ 357 /// uint32_t bsv_volume; /* array index of sliced 358 /// volume */ 359 /// }; 360 /// 361 /// struct pnfs_block_concat_volume_info4 { 362 /// uint32_t bcv_volumes<>; /* array indices of volumes 363 /// which are concatenated */ 364 /// }; 365 /// 366 /// struct pnfs_block_stripe_volume_info4 { 367 /// length4 bsv_stripe_unit; /* size of stripe in bytes */ 368 /// uint32_t bsv_volumes<>; /* array indices of volumes 369 /// which are striped across -- 370 /// MUST be same size */ 371 /// }; 372 /// 373 /// union pnfs_block_volume4 switch (pnfs_block_volume_type4 type) { 374 /// case PNFS_BLOCK_VOLUME_SIMPLE: 375 /// pnfs_block_simple_volume_info4 bv_simple_info; 376 /// case PNFS_BLOCK_VOLUME_SLICE: 377 /// pnfs_block_slice_volume_info4 bv_slice_info; 378 /// case PNFS_BLOCK_VOLUME_CONCAT: 379 /// pnfs_block_concat_volume_info4 bv_concat_info; 380 /// case PNFS_BLOCK_VOLUME_STRIPE: 381 /// pnfs_block_stripe_volume_info4 bv_stripe_info; 382 /// }; 383 /// 384 /// /* block layout specific type for da_addr_body */ 385 /// struct pnfs_block_deviceaddr4 { 386 /// pnfs_block_volume4 bda_volumes<>; /* array of volumes */ 387 /// }; 388 /// 390 The "pnfs_block_deviceaddr4" data structure is a structure that 391 allows arbitrarily complex nested volume structures to be encoded. 392 The types of aggregations that are allowed are stripes, 393 concatenations, and slices. Note that the volume topology expressed 394 in the pnfs_block_deviceaddr4 data structure will always resolve to a 395 set of pnfs_block_volume_type4 PNFS_BLOCK_VOLUME_SIMPLE. 
The array of volumes is ordered such that the root of the volume
hierarchy is the last element of the array.  Concat, slice and stripe
volumes MUST refer to volumes defined by lower indexed elements of the
array.

The "pnfs_block_deviceaddr4" data structure is returned by the server
as the storage-protocol-specific opaque field da_addr_body in the
"device_addr4" structure, in response to a successful GETDEVICEINFO
operation [NFSV4.1].

As noted above, all device_addr4 structures eventually resolve to a set
of volumes of type PNFS_BLOCK_VOLUME_SIMPLE.  These volumes are each
uniquely identified by a set of signature components.  Complicated
volume hierarchies may be composed of dozens of volumes each with
several signature components; thus the device address may require
several kilobytes.  The client SHOULD be prepared to allocate a large
buffer to contain the result.  In the case of the server returning
NFS4ERR_TOOSMALL, the client SHOULD allocate a buffer of at least
gdir_mincount_bytes to contain the expected result and retry the
GETDEVICEINFO request.

2.2.3. GETDEVICELIST and GETDEVICEINFO deviceid4

In response to a GETDEVICELIST request, the server typically will
return a single "deviceid4" in the gdlr_deviceid_list array.  This is
because the deviceid4, when passed to GETDEVICEINFO, will return a
"device_addr4" which encodes the entire volume hierarchy.  In the case
of copy-on-write file systems, the "gdlr_deviceid_list" array may
contain two deviceid4s, one referencing the read-only volume hierarchy,
and one referencing the writable volume hierarchy.  There is no
required ordering of the readable and writable ids in the array, as the
volumes are uniquely identified by their deviceid4, and are referred to
by layouts using the deviceid4.  Another example of the server
returning multiple device items occurs when the file handle represents
the root of a name space spanning multiple physical file systems on the
server, each with a different volume hierarchy.  In this example a
server implementation may return either a list of deviceids used by
each of the physical file systems, or it may return an empty list.

Each deviceid4 returned by a successful GETDEVICELIST operation is a
shorthand id used to reference the whole volume topology.  These device
ids, as well as device ids returned in extents of a LAYOUTGET
operation, can be used as input to the GETDEVICEINFO operation.
Decoding the "pnfs_block_deviceaddr4" results in a flat ordering of
data blocks mapped to PNFS_BLOCK_VOLUME_SIMPLE volumes.  Combined with
the mapping to a client LUN described in Section 2.2.1 (Volume
Identification), a logical volume offset can be mapped to a block on a
pNFS client LUN [NFSV4.1].

2.3. Data Structures: Extents and Extent Lists

A pNFS block layout is a list of extents within a flat array of data
blocks in a logical volume.  The details of the volume topology can be
determined by using the GETDEVICEINFO operation (see the discussion of
volume identification in Section 2.2 above).  The block layout
describes the individual block extents on the volume that make up the
file.  The offsets and length contained in an extent are specified in
units of bytes.

/// enum pnfs_block_extent_state4 {
///     PNFS_BLOCK_READ_WRITE_DATA = 0, /* the data located by this
///                                        extent is valid
///                                        for reading and writing.
*/ 459 /// PNFS_BLOCK_READ_DATA = 1, /* the data located by this 460 /// extent is valid for reading 461 /// only; it may not be 462 /// written. */ 463 /// PNFS_BLOCK_INVALID_DATA = 2, /* the location is valid; the 464 /// data is invalid. It is a 465 /// newly (pre-) allocated 466 /// extent. There is physical 467 /// space on the volume. */ 468 /// PNFS_BLOCK_NONE_DATA = 3 /* the location is invalid. It 469 /// is a hole in the file. 470 /// There is no physical space 471 /// on the volume. */ 472 /// }; 474 /// 475 /// struct pnfs_block_extent4 { 476 /// deviceid4 bex_vol_id; /* id of logical volume on 477 /// which extent of file is 478 /// stored. */ 479 /// offset4 bex_file_offset; /* the starting byte offset in 480 /// the file */ 481 /// length4 bex_length; /* the size in bytes of the 482 /// extent */ 483 /// offset4 bex_storage_offset; /* the starting byte offset 484 /// in the volume */ 485 /// pnfs_block_extent_state4 bex_state; 486 /// /* the state of this extent */ 487 /// }; 488 /// 489 /// /* block layout specific type for loc_body */ 490 /// struct pnfs_block_layout4 { 491 /// pnfs_block_extent4 blo_extents<>; 492 /// /* extents which make up this 493 /// layout. */ 494 /// }; 495 /// 497 The block layout consists of a list of extents which map the logical 498 regions of the file to physical locations on a volume. The 499 "bex_storage_offset" field within each extent identifies a location 500 on the logical volume specified by the "bex_vol_id" field in the 501 extent. The bex_vol_id itself is shorthand for the whole topology of 502 the logical volume on which the file is stored. The client is 503 responsible for translating this logical offset into an offset on the 504 appropriate underlying SAN logical unit. In most cases all extents 505 in a layout will reside on the same volume and thus have the same 506 bex_vol_id. In the case of copy on write file systems, the 507 PNFS_BLOCK_READ_DATA extents may have a different bex_vol_id from the 508 writable extents. 510 Each extent maps a logical region of the file onto a portion of the 511 specified logical volume. The bex_file_offset, bex_length, and 512 bex_state fields for an extent returned from the server are valid for 513 all extents. In contrast, the interpretation of the 514 bex_storage_offset field depends on the value of bex_state as follows 515 (in increasing order): 517 o PNFS_BLOCK_READ_WRITE_DATA means that bex_storage_offset is valid, 518 and points to valid/initialized data that can be read and written. 520 o PNFS_BLOCK_READ_DATA means that bex_storage_offset is valid and 521 points to valid/ initialized data which can only be read. Write 522 operations are prohibited; the client may need to request a read- 523 write layout. 525 o PNFS_BLOCK_INVALID_DATA means that bex_storage_offset is valid, 526 but points to invalid un-initialized data. This data must not be 527 physically read from the disk until it has been initialized. A 528 read request for a PNFS_BLOCK_INVALID_DATA extent must fill the 529 user buffer with zeros, unless the extent is covered by a 530 PNFS_BLOCK_READ_DATA extent of a copy-on-write file system. Write 531 requests must write whole server-sized blocks to the disk; bytes 532 not initialized by the user must be set to zero. Any write to 533 storage in a PNFS_BLOCK_INVALID_DATA extent changes the written 534 portion of the extent to PNFS_BLOCK_READ_WRITE_DATA; the pNFS 535 client is responsible for reporting this change via LAYOUTCOMMIT. 
537 o PNFS_BLOCK_NONE_DATA means that bex_storage_offset is not valid, 538 and this extent may not be used to satisfy write requests. Read 539 requests may be satisfied by zero-filling as for 540 PNFS_BLOCK_INVALID_DATA. PNFS_BLOCK_NONE_DATA extents may be 541 returned by requests for readable extents; they are never returned 542 if the request was for a writeable extent. 544 An extent list lists all relevant extents in increasing order of the 545 bex_file_offset of each extent; any ties are broken by increasing 546 order of the extent state (bex_state). 548 2.3.1. Layout Requests and Extent Lists 550 Each request for a layout specifies at least three parameters: file 551 offset, desired size, and minimum size. If the status of a request 552 indicates success, the extent list returned must meet the following 553 criteria: 555 o A request for a readable (but not writeable) layout returns only 556 PNFS_BLOCK_READ_DATA or PNFS_BLOCK_NONE_DATA extents (but not 557 PNFS_BLOCK_INVALID_DATA or PNFS_BLOCK_READ_WRITE_DATA extents). 559 o A request for a writeable layout returns 560 PNFS_BLOCK_READ_WRITE_DATA or PNFS_BLOCK_INVALID_DATA extents (but 561 not PNFS_BLOCK_NONE_DATA extents). It may also return 562 PNFS_BLOCK_READ_DATA extents only when the offset ranges in those 563 extents are also covered by PNFS_BLOCK_INVALID_DATA extents to 564 permit writes. 566 o The first extent in the list MUST contain the requested starting 567 offset. 569 o The total size of extents within the requested range MUST cover at 570 least the minimum size. One exception is allowed: the total size 571 MAY be smaller if only readable extents were requested and EOF is 572 encountered. 574 o Extents in the extent list MUST be logically contiguous for a 575 read-only layout. For a read-write layout, the set of writable 576 extents (i.e., excluding PNFS_BLOCK_READ_DATA extents) MUST be 577 logically contiguous. Every PNFS_BLOCK_READ_DATA extent in a 578 read-write layout MUST be covered by one or more 579 PNFS_BLOCK_INVALID_DATA extents. This overlap of 580 PNFS_BLOCK_READ_DATA and PNFS_BLOCK_INVALID_DATA extents is the 581 only permitted extent overlap. 583 o Extents MUST be ordered in the list by starting offset, with 584 PNFS_BLOCK_READ_DATA extents preceding PNFS_BLOCK_INVALID_DATA 585 extents in the case of equal bex_file_offsets. 587 If the minimum requested size, loga_minlength, is zero, this is an 588 indication to the metadata server that the client desires any layout 589 at offset loga_offset or less that the metadata server has "readily 590 available". Readily is subjective, and depends on the layout type 591 and the pNFS server implementation. For block layout servers, 592 readily available SHOULD be interpreted such that readable layouts 593 are always available, even if some extents are in the 594 PNFS_BLOCK_NONE_DATA state. When processing requests for writable 595 layouts, a layout is readily available if extents can be returned in 596 the PNFS_BLOCK_READ_WRITE_DATA state. 598 2.3.2. Layout Commits 600 /// /* block layout specific type for lou_body */ 601 /// struct pnfs_block_layoutupdate4 { 602 /// pnfs_block_extent4 blu_commit_list<>; 603 /// /* list of extents which 604 /// * now contain valid data. 605 /// */ 606 /// }; 607 /// 609 The "pnfs_block_layoutupdate4" structure is used by the client as the 610 block-protocol specific argument in a LAYOUTCOMMIT operation. 
The 611 "blu_commit_list" field is an extent list covering regions of the 612 file layout that were previously in the PNFS_BLOCK_INVALID_DATA 613 state, but have been written by the client and should now be 614 considered in the PNFS_BLOCK_READ_WRITE_DATA state. The bex_state 615 field of each extent in the blu_commit_list MUST be set to 616 PNFS_BLOCK_READ_WRITE_DATA. The extents in the commit list MUST be 617 disjoint and MUST be sorted by bex_file_offset. The 618 bex_storage_offset field is unused. Implementers should be aware 619 that a server may be unable to commit regions at a granularity 620 smaller than a file-system block (typically 4KB or 8KB). As noted 621 above, the block-size that the server uses is available as an NFSv4 622 attribute, and any extents included in the "blu_commit_list" MUST be 623 aligned to this granularity and have a size that is a multiple of 624 this granularity. If the client believes that its actions have moved 625 the end-of-file into the middle of a block being committed, the 626 client MUST write zeroes from the end-of-file to the end of that 627 block before committing the block. Failure to do so may result in 628 junk (uninitialized data) appearing in that area if the file is 629 subsequently extended by moving the end-of-file. 631 2.3.3. Layout Returns 633 The LAYOUTRETURN operation is done without any block layout specific 634 data. When the LAYOUTRETURN operation specifies a 635 LAYOUTRETURN4_FILE_return type, then the layoutreturn_file4 data 636 structure specifies the region of the file layout that is no longer 637 needed by the client. The opaque "lrf_body" field of the 638 "layoutreturn_file4" data structure MUST have length zero. A 639 LAYOUTRETURN operation represents an explicit release of resources by 640 the client, usually done for the purpose of avoiding unnecessary 641 CB_LAYOUTRECALL operations in the future. The client may return 642 disjoint regions of the file by using multiple LAYOUTRETURN 643 operations within a single COMPOUND operation. 645 Note that the block/volume layout supports unilateral layout 646 revocation. When a layout is unilaterally revoked by the server, 647 usually due to the client's lease time expiring, or a delegation 648 being recalled, or the client failing to return a layout in a timely 649 manner, it is important for the sake of correctness that any in- 650 flight I/Os that the client issued before the layout was revoked are 651 rejected at the storage. For the block/volume protocol, this is 652 possible by fencing a client with an expired layout timer from the 653 physical storage. Note, however, that the granularity of this 654 operation can only be at the host/logical-unit level. Thus, if one 655 of a client's layouts is unilaterally revoked by the server, it will 656 effectively render useless *all* of the client's layouts for files 657 located on the storage units comprising the logical volume. This may 658 render useless the client's layouts for files in other file systems. 660 2.3.4. Client Copy-on-Write Processing 662 Copy-on-write is a mechanism used to support file and/or file system 663 snapshots. When writing to unaligned regions, or to regions smaller 664 than a file system block, the writer must copy the portions of the 665 original file data to a new location on disk. This behavior can 666 either be implemented on the client or the server. The paragraphs 667 below describe how a pNFS block layout client implements access to a 668 file which requires copy-on-write semantics. 
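Before those paragraphs, a rough Python sketch of the required
read-merge-write sequence may be helpful.  It is illustrative only; the
read_old and write_new helpers, and the byte-granularity interface, are
assumptions of the example rather than part of the protocol.

   def cow_write(buf, file_offset, blocksize, read_old, write_new):
       # read_old(offset, count): hypothetical helper that reads
       #   'count' bytes from the PNFS_BLOCK_READ_DATA extent.
       # write_new(offset, data): hypothetical helper that writes
       #   whole blocks to the covering PNFS_BLOCK_INVALID_DATA extent.
       start = (file_offset // blocksize) * blocksize
       end = file_offset + len(buf)
       end_aligned = -(-end // blocksize) * blocksize   # round up

       out = bytearray(end_aligned - start)
       # Only partial first/last blocks need the old data; blocks that
       # are completely overwritten are not fetched.
       if file_offset > start:
           out[:blocksize] = read_old(start, blocksize)
       if end < end_aligned:
           out[-blocksize:] = read_old(end_aligned - blocksize, blocksize)
       out[file_offset - start:end - start] = buf
       write_new(start, bytes(out))      # block-aligned, whole blocks
       return (start, end_aligned - start)

The returned block-aligned range is what the client would later report
as PNFS_BLOCK_READ_WRITE_DATA in the blu_commit_list of a LAYOUTCOMMIT
(Section 2.3.2); once written, reads of that range must be satisfied
from the now-initialized storage rather than from the
PNFS_BLOCK_READ_DATA extent.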
670 Distinguishing the PNFS_BLOCK_READ_WRITE_DATA and 671 PNFS_BLOCK_READ_DATA extent types in combination with the allowed 672 overlap of PNFS_BLOCK_READ_DATA extents with PNFS_BLOCK_INVALID_DATA 673 extents allows copy-on-write processing to be done by pNFS clients. 674 In classic NFS, this operation would be done by the server. Since 675 pNFS enables clients to do direct block access, it is useful for 676 clients to participate in copy-on-write operations. All block/volume 677 pNFS clients MUST support this copy-on-write processing. 679 When a client wishes to write data covered by a PNFS_BLOCK_READ_DATA 680 extent, it MUST have requested a writable layout from the server; 681 that layout will contain PNFS_BLOCK_INVALID_DATA extents to cover all 682 the data ranges of that layout's PNFS_BLOCK_READ_DATA extents. More 683 precisely, for any bex_file_offset range covered by one or more 684 PNFS_BLOCK_READ_DATA extents in a writable layout, the server MUST 685 include one or more PNFS_BLOCK_INVALID_DATA extents in the layout 686 that cover the same bex_file_offset range. When performing a write 687 to such an area of a layout, the client MUST effectively copy the 688 data from the PNFS_BLOCK_READ_DATA extent for any partial blocks of 689 bex_file_offset and range, merge in the changes to be written, and 690 write the result to the PNFS_BLOCK_INVALID_DATA extent for the blocks 691 for that bex_file_offset and range. That is, if entire blocks of 692 data are to be overwritten by an operation, the corresponding 693 PNFS_BLOCK_READ_DATA blocks need not be fetched, but any partial- 694 block writes must be merged with data fetched via 695 PNFS_BLOCK_READ_DATA extents before storing the result via 696 PNFS_BLOCK_INVALID_DATA extents. For the purposes of this 697 discussion, "entire blocks" and "partial blocks" refer to the 698 server's file-system block size. Storing of data in a 699 PNFS_BLOCK_INVALID_DATA extent converts the written portion of the 700 PNFS_BLOCK_INVALID_DATA extent to a PNFS_BLOCK_READ_WRITE_DATA 701 extent; all subsequent reads MUST be performed from this extent; the 702 corresponding portion of the PNFS_BLOCK_READ_DATA extent MUST NOT be 703 used after storing data in a PNFS_BLOCK_INVALID_DATA extent. If a 704 client writes only a portion of an extent, the extent may be split at 705 block aligned boundaries. 707 When a client wishes to write data to a PNFS_BLOCK_INVALID_DATA 708 extent that is not covered by a PNFS_BLOCK_READ_DATA extent, it MUST 709 treat this write identically to a write to a file not involved with 710 copy-on-write semantics. Thus, data must be written in at least 711 block size increments, aligned to multiples of block sized offsets, 712 and unwritten portions of blocks must be zero filled. 714 In the LAYOUTCOMMIT operation that normally sends updated layout 715 information back to the server, for writable data, some 716 PNFS_BLOCK_INVALID_DATA extents may be committed as 717 PNFS_BLOCK_READ_WRITE_DATA extents, signifying that the storage at 718 the corresponding bex_storage_offset values has been stored into and 719 is now to be considered as valid data to be read. 720 PNFS_BLOCK_READ_DATA extents are not committed to the server. For 721 extents that the client receives via LAYOUTGET as 722 PNFS_BLOCK_INVALID_DATA and returns via LAYOUTCOMMIT as 723 PNFS_BLOCK_READ_WRITE_DATA, the server will understand that the 724 PNFS_BLOCK_READ_DATA mapping for that extent is no longer valid or 725 necessary for that file. 727 2.3.5. 
Extents are Permissions 729 Layout extents returned to pNFS clients grant permission to read or 730 write; PNFS_BLOCK_READ_DATA and PNFS_BLOCK_NONE_DATA are read-only 731 (PNFS_BLOCK_NONE_DATA reads as zeroes), PNFS_BLOCK_READ_WRITE_DATA 732 and PNFS_BLOCK_INVALID_DATA are read/write, (PNFS_BLOCK_INVALID_DATA 733 reads as zeros, any write converts it to PNFS_BLOCK_READ_WRITE_DATA). 734 This is the only client means of obtaining permission to perform 735 direct I/O to storage devices; a pNFS client MUST NOT perform direct 736 I/O operations that are not permitted by an extent held by the 737 client. Client adherence to this rule places the pNFS server in 738 control of potentially conflicting storage device operations, 739 enabling the server to determine what does conflict and how to avoid 740 conflicts by granting and recalling extents to/from clients. 742 Block/volume class storage devices are not required to perform read 743 and write operations atomically. Overlapping concurrent read and 744 write operations to the same data may cause the read to return a 745 mixture of before-write and after-write data. Overlapping write 746 operations can be worse, as the result could be a mixture of data 747 from the two write operations; data corruption can occur if the 748 underlying storage is striped and the operations complete in 749 different orders on different stripes. A pNFS server can avoid these 750 conflicts by implementing a single writer XOR multiple readers 751 concurrency control policy when there are multiple clients who wish 752 to access the same data. This policy MUST be implemented when 753 storage devices do not provide atomicity for concurrent read/write 754 and write/write operations to the same data. 756 If a client makes a layout request that conflicts with an existing 757 layout delegation, the request will be rejected with the error 758 NFS4ERR_LAYOUTTRYLATER. This client is then expected to retry the 759 request after a short interval. During this interval the server 760 SHOULD recall the conflicting portion of the layout delegation from 761 the client that currently holds it. This reject-and-retry approach 762 does not prevent client starvation when there is contention for the 763 layout of a particular file. For this reason a pNFS server SHOULD 764 implement a mechanism to prevent starvation. One possibility is that 765 the server can maintain a queue of rejected layout requests. Each 766 new layout request can be checked to see if it conflicts with a 767 previous rejected request, and if so, the newer request can be 768 rejected. Once the original requesting client retries its request, 769 its entry in the rejected request queue can be cleared, or the entry 770 in the rejected request queue can be removed when it reaches a 771 certain age. 773 NFSv4 supports mandatory locks and share reservations. These are 774 mechanisms that clients can use to restrict the set of I/O operations 775 that are permissible to other clients. Since all I/O operations 776 ultimately arrive at the NFSv4 server for processing, the server is 777 in a position to enforce these restrictions. However, with pNFS 778 layouts, I/Os will be issued from the clients that hold the layouts 779 directly to the storage devices that host the data. These devices 780 have no knowledge of files, mandatory locks, or share reservations, 781 and are not in a position to enforce such restrictions. 
For this 782 reason the NFSv4 server MUST NOT grant layouts that conflict with 783 mandatory locks or share reservations. Further, if a conflicting 784 mandatory lock request or a conflicting open request arrives at the 785 server, the server MUST recall the part of the layout in conflict 786 with the request before granting the request. 788 2.3.6. End-of-file Processing 790 The end-of-file location can be changed in two ways: implicitly as 791 the result of a WRITE or LAYOUTCOMMIT beyond the current end-of-file, 792 or explicitly as the result of a SETATTR request. Typically, when a 793 file is truncated by an NFSv4 client via the SETATTR call, the server 794 frees any disk blocks belonging to the file which are beyond the new 795 end-of-file byte, and MUST write zeros to the portion of the new end- 796 of-file block beyond the new end-of-file byte. These actions render 797 any pNFS layouts which refer to the blocks that are freed or written 798 semantically invalid. Therefore, the server MUST recall from clients 799 the portions of any pNFS layouts which refer to blocks that will be 800 freed or written by the server before processing the truncate 801 request. These recalls may take time to complete; as explained in 802 [NFSv4.1], if the server cannot respond to the client SETATTR request 803 in a reasonable amount of time, it SHOULD reply to the client with 804 the error NFS4ERR_DELAY. 806 Blocks in the PNFS_BLOCK_INVALID_DATA state which lie beyond the new 807 end-of-file block present a special case. The server has reserved 808 these blocks for use by a pNFS client with a writable layout for the 809 file, but the client has yet to commit the blocks, and they are not 810 yet a part of the file mapping on disk. The server MAY free these 811 blocks while processing the SETATTR request. If so, the server MUST 812 recall any layouts from pNFS clients which refer to the blocks before 813 processing the truncate. If the server does not free the 814 PNFS_BLOCK_INVALID_DATA blocks while processing the SETATTR request, 815 it need not recall layouts which refer only to the PNFS_BLOCK_INVALID 816 DATA blocks. 818 When a file is extended implicitly by a WRITE or LAYOUTCOMMIT beyond 819 the current end-of-file, or extended explicitly by a SETATTR request, 820 the server need not recall any portions of any pNFS layouts. 822 2.3.7. Layout Hints 824 The SETATTR operation supports a layout hint attribute [NFSv4.1]. 825 When the client sets a layout hint (data type layouthint4) with a 826 layout type of LAYOUT4_BLOCK_VOLUME (the loh_type field), the 827 loh_body field contains a value of data type pnfs_block_layouthint4. 829 /// /* block layout specific type for loh_body */ 830 /// struct pnfs_block_layouthint4 { 831 /// uint64_t blh_maximum_io_time; /* maximum i/o time in seconds 832 /// */ 833 /// }; 834 /// 836 The block layout client uses the layout hint data structure to 837 communicate to the server the maximum time that it may take an I/O to 838 execute on the client. Clients using block layouts MUST set the 839 layout hint attribute before using LAYOUTGET operations. 841 2.3.8. Client Fencing 843 The pNFS block protocol must handle situations in which a system 844 failure, typically a network connectivity issue, requires the server 845 to unilaterally revoke extents from one client in order to transfer 846 the extents to another client. 
The pNFS server implementation MUST 847 ensure that when resources are transferred to another client, they 848 are not used by the client originally owning them, and this must be 849 ensured against any possible combination of partitions and delays 850 among all of the participants to the protocol (server, storage and 851 client). Two approaches to guaranteeing this isolation are possible 852 and are discussed below. 854 One implementation choice for fencing the block client from the block 855 storage is the use of LUN masking or mapping at the storage systems 856 or storage area network to disable access by the client to be 857 isolated. This requires server access to a management interface for 858 the storage system and authorization to perform LUN masking and 859 management operations. For example, SMI-S [SMIS] provides a means to 860 discover and mask LUNs, including a means of associating clients with 861 the necessary World Wide Names or Initiator names to be masked. 863 In the absence of support for LUN masking, the server has to rely on 864 the clients to implement a timed lease I/O fencing mechanism. 865 Because clients do not know if the server is using LUN masking, in 866 all cases the client MUST implement timed lease fencing. In timed 867 lease fencing we define two time periods, the first, "lease_time" is 868 the length of a lease as defined by the server's lease_time attribute 869 (see [NFSV4.1]), and the second, "blh_maximum_io_time" is the maximum 870 time it can take for a client I/O to the storage system to either 871 complete or fail; this value is often 30 seconds or 60 seconds, but 872 may be longer in some environments. If the maximum client I/O time 873 cannot be bounded, the client MUST use a value of all 1s as the 874 blh_maximum_io_time. 876 The client MUST use SETATTR with a layout hint of type 877 LAYOUT4_BLOCK_VOLUME to inform the server of its maximum I/O time 878 prior to issuing the first LAYOUTGET operation. The maximum io time 879 hint is a per client attribute, and as such the server SHOULD 880 maintain the value set by each client. A server which implements 881 fencing via LUN masking SHOULD accept any maximum io time value from 882 a client. A server which does not implement fencing may return an 883 error NFS4ERR_INVAL to the SETATTR operation. Such a server SHOULD 884 return NFS4ERR_INVAL when a client sends an unbounded maximum I/O 885 time (all 1s), or when the maximum I/O time is significantly greater 886 than that of other clients using block layouts with pNFS. 888 When a client receives the error NFS4ERR_INVAL in response to the 889 SETATTR operation for a layout hint, the client MUST NOT use the 890 LAYOUTGET operation. After responding with NFS4ERR_INVAL to the 891 SETATTR for layout hint, the server MUST return the error 892 NFS4ERR_LAYOUTUNAVAILABLE to all subsequent LAYOUTGET operations from 893 that client. Thus the server, by returning either NFS4ERR_INVAL or 894 NFS4_OK determines whether or not a client with a large, or an 895 unbounded maximum I/O time may use pNFS. 897 Using the lease time and the maximum i/o time values, we specify the 898 behavior of the client and server as follows. 900 When a client receives layout information via a LAYOUTGET operation, 901 those layouts are valid for at most "lease_time" seconds from when 902 the server granted them. A layout is renewed by any successful 903 SEQUENCE operation, or whenever a new stateid is created or updated 904 (see the section "Lease Renewal" of [NFSV4.1]). 
If the layout lease 905 is not renewed prior to expiration, the client MUST cease to use the 906 layout after "lease_time" seconds from when it either sent the 907 original LAYOUTGET command, or sent the last operation renewing the 908 lease. In other words, the client may not issue any I/O to blocks 909 specified by an expired layout. In the presence of large 910 communication delays between the client and server it is even 911 possible for the lease to expire prior to the server response 912 arriving at the client. In such a situation the client MUST NOT use 913 the expired layouts, and SHOULD revert to using standard NFSv41 READ 914 and WRITE operations. Furthermore, the client must be configured 915 such that I/O operations complete within the "blh_maximum_io_time" 916 even in the presence of multipath drivers that will retry I/Os via 917 multiple paths. 919 As stated in the section "Dealing with Lease Expiration on the 920 Client" of [NFSV4.1], if any SEQUENCE operation is successful, but 921 sr_status_flag has SEQ4_STATUS_EXPIRED_ALL_STATE_REVOKED, 922 SEQ4_STATUS_EXPIRED_SOME_STATE_REVOKED, or 923 SEQ4_STATUS_ADMIN_STATE_REVOKED set, the client MUST immediately 924 cease to use all layouts and device id to device address mappings 925 associated with the corresponding server. 927 In the absence of known two way communication between the client and 928 the server on the fore channel, the server must wait for at least the 929 time period "lease_time" plus "blh_maximum_io_time" before 930 transferring layouts from the original client to any other client. 931 The server, like the client, must take a conservative approach, and 932 start the lease expiration timer from the time that it received the 933 operation which last renewed the lease. 935 2.4. Crash Recovery Issues 937 A critical requirement in crash recovery is that both the client and 938 the server know when the other has failed. Additionally, it is 939 required that a client sees a consistent view of data across server 940 restarts. These requirements and a full discussion of crash recovery 941 issues are covered in the "Crash Recovery" section of the NFSv41 942 specification [NFSV4.1]. This document contains additional crash 943 recovery material specific only to the block/volume layout. 945 When the server crashes while the client holds a writable layout, and 946 the client has written data to blocks covered by the layout, and the 947 blocks are still in the PNFS_BLOCK_INVALID_DATA state, the client has 948 two options for recovery. If the data that has been written to these 949 blocks is still cached by the client, the client can simply re-write 950 the data via NFSv4, once the server has come back online. However, 951 if the data is no longer in the client's cache, the client MUST NOT 952 attempt to source the data from the data servers. Instead, it should 953 attempt to commit the blocks in question to the server during the 954 server's recovery grace period, by sending a LAYOUTCOMMIT with the 955 "loca_reclaim" flag set to true. This process is described in detail 956 in [NFSv4.1] section 18.42.4. 958 2.5. Recalling resources: CB_RECALL_ANY 960 The server may decide that it cannot hold all of the state for 961 layouts without running out of resources. In such a case, it is free 962 to recall individual layouts using CB_LAYOUTRECALL to reduce the 963 load, or it may choose to request that the client return any layout. 
The NFSv4.1 specification [NFSv4.1] defines the following types:

   const RCA4_TYPE_MASK_BLK_LAYOUT = 4;

   struct CB_RECALL_ANY4args {
       uint32_t  craa_objects_to_keep;
       bitmap4   craa_type_mask;
   };

When the server sends a CB_RECALL_ANY request to a client specifying
the RCA4_TYPE_MASK_BLK_LAYOUT bit in craa_type_mask, the client should
immediately respond with NFS4_OK, and then asynchronously return
complete file layouts until the number of files with layouts cached on
the client is less than craa_objects_to_keep.

2.6. Transient and Permanent Errors

The server may respond to LAYOUTGET with a variety of error statuses.
These errors can convey transient conditions or more permanent
conditions that are unlikely to be resolved soon.

The transient errors, NFS4ERR_RECALLCONFLICT and NFS4ERR_TRYLATER, are
used to indicate that the server cannot immediately grant the layout to
the client.  In the former case, this is because the server has
recently issued a CB_LAYOUTRECALL to the requesting client, whereas in
the case of NFS4ERR_TRYLATER, the server cannot grant the request,
possibly due to sharing conflicts with other clients.  In either case,
a reasonable approach for the client is to wait several milliseconds
and retry the request.  The client SHOULD track the number of retries,
and if forward progress is not made, the client SHOULD send the READ or
WRITE operation directly to the server.

The error NFS4ERR_LAYOUTUNAVAILABLE may be returned by the server if
layouts are not supported for the requested file or its containing file
system.  The server may also return this error code if the server is in
the process of migrating the file from secondary storage, or for any
other reason which causes the server to be unable to supply the layout.
As a result of receiving NFS4ERR_LAYOUTUNAVAILABLE, the client SHOULD
send future READ and WRITE requests directly to the server.  It is
expected that a client will not cache the file's layoutunavailable
state forever, particularly if the file is closed, and thus eventually
the client MAY reissue a LAYOUTGET operation.

3. Security Considerations

Typically, SAN disk arrays and SAN protocols provide access control
mechanisms (e.g., logical unit number mapping and/or masking) which
operate at the granularity of individual hosts.  The functionality
provided by such mechanisms makes it possible for the server to "fence"
individual client machines from certain physical disks---that is to
say, to prevent individual client machines from reading or writing to
certain physical disks.  Finer-grained access control methods are not
generally available.  For this reason, certain security
responsibilities are delegated to pNFS clients for block/volume
layouts.  Block/volume storage systems generally control access at a
volume granularity, and hence pNFS clients have to be trusted to only
perform accesses allowed by the layout extents they currently hold
(e.g., and not access storage for files on which a layout extent is not
held).  In general, the server will not be able to prevent a client
which holds a layout for a file from accessing parts of the physical
disk not covered by the layout.  Similarly, the server will not be able
to prevent a client from accessing blocks covered by a layout that it
has already returned.
This block-based 1029 level of protection must be provided by the client software. 1031 An alternative method of block/volume protocol use is for the storage 1032 devices to export virtualized block addresses, which do reflect the 1033 files to which blocks belong. These virtual block addresses are 1034 exported to pNFS clients via layouts. This allows the storage device 1035 to make appropriate access checks, while mapping virtual block 1036 addresses to physical block addresses. In environments where the 1037 security requirements are such that client-side protection from 1038 access to storage outside of the authorized layout extents is not 1039 sufficient, pNFS block/volume storage layouts SHOULD NOT be used 1040 unless the storage device is able to implement the appropriate access 1041 checks, via use of virtualized block addresses or other means. In 1042 contrast, an environment where client-side protection may suffice 1043 consists of co-located clients, server and storage systems in a 1044 datacenter with a physically isolated SAN under control of a single 1045 system administrator or small group of system administrators. 1047 This also has implications for some NFSv4 functionality outside pNFS. 1048 For instance, if a file is covered by a mandatory read-only lock, the 1049 server can ensure that only readable layouts for the file are granted 1050 to pNFS clients. However, it is up to each pNFS client to ensure 1051 that the readable layout is used only to service read requests, and 1052 not to allow writes to the existing parts of the file. Similarly, 1053 block/volume storage devices are unable to validate NFS Access 1054 Control Lists (ACLs) and file open modes, so the client must enforce 1055 the policies before sending a read or write request to the storage 1056 device. Since block/volume storage systems are generally not capable 1057 of enforcing such file-based security, in environments where pNFS 1058 clients cannot be trusted to enforce such policies, pNFS block/volume 1059 storage layouts SHOULD NOT be used. 1061 Access to block/volume storage is logically at a lower layer of the 1062 I/O stack than NFSv4, and hence NFSv4 security is not directly 1063 applicable to protocols that access such storage directly. Depending 1064 on the protocol, some of the security mechanisms provided by NFSv4 1065 (e.g., encryption, cryptographic integrity) may not be available, or 1066 may be provided via different means. At one extreme, pNFS with 1067 block/volume storage can be used with storage access protocols (e.g., 1068 parallel SCSI) that provide essentially no security functionality. 1069 At the other extreme, pNFS may be used with storage protocols such as 1070 iSCSI that can provide significant security functionality. It is the 1071 responsibility of those administering and deploying pNFS with a 1072 block/volume storage access protocol to ensure that appropriate 1073 protection is provided to that protocol (physical security is a 1074 common means for protocols not based on IP). In environments where 1075 the security requirements for the storage protocol cannot be met, 1076 pNFS block/volume storage layouts SHOULD NOT be used. 1078 When security is available for a storage protocol, it is generally at 1079 a different granularity and with a different notion of identity than 1080 NFSv4 (e.g., NFSv4 controls user access to files, iSCSI controls 1081 initiator access to volumes). 
3. Security Considerations

   Typically, SAN disk arrays and SAN protocols provide access control
   mechanisms (e.g., logical unit number mapping and/or masking) that
   operate at the granularity of individual hosts.  The functionality
   provided by such mechanisms makes it possible for the server to
   "fence" individual client machines from certain physical disks, that
   is, to prevent individual client machines from reading or writing to
   certain physical disks.  Finer-grained access control methods are not
   generally available.  For this reason, certain security
   responsibilities are delegated to pNFS clients for block/volume
   layouts.  Block/volume storage systems generally control access at a
   volume granularity, and hence pNFS clients have to be trusted to
   perform only those accesses allowed by the layout extents they
   currently hold (e.g., not to access storage for files on which no
   layout extent is held).  In general, the server will not be able to
   prevent a client that holds a layout for a file from accessing parts
   of the physical disk not covered by the layout.  Similarly, the
   server will not be able to prevent a client from accessing blocks
   covered by a layout that it has already returned.  This block-based
   level of protection must be provided by the client software; a
   non-normative sketch of such a client-side check appears at the end
   of this section.

   An alternative method of block/volume protocol use is for the storage
   devices to export virtualized block addresses, which do reflect the
   files to which blocks belong.  These virtual block addresses are
   exported to pNFS clients via layouts.  This allows the storage device
   to make appropriate access checks, while mapping virtual block
   addresses to physical block addresses.  In environments where the
   security requirements are such that client-side protection from
   access to storage outside of the authorized layout extents is not
   sufficient, pNFS block/volume storage layouts SHOULD NOT be used
   unless the storage device is able to implement the appropriate access
   checks, via use of virtualized block addresses or other means.  In
   contrast, an environment where client-side protection may suffice
   consists of co-located clients, server, and storage systems in a
   datacenter with a physically isolated SAN under the control of a
   single system administrator or a small group of system
   administrators.

   This also has implications for some NFSv4 functionality outside pNFS.
   For instance, if a file is covered by a mandatory read-only lock, the
   server can ensure that only readable layouts for the file are granted
   to pNFS clients.  However, it is up to each pNFS client to ensure
   that the readable layout is used only to service read requests, and
   not to allow writes to the existing parts of the file.  Similarly,
   block/volume storage devices are unable to validate NFS Access
   Control Lists (ACLs) and file open modes, so the client must enforce
   the policies before sending a read or write request to the storage
   device.  Since block/volume storage systems are generally not capable
   of enforcing such file-based security, in environments where pNFS
   clients cannot be trusted to enforce such policies, pNFS block/volume
   storage layouts SHOULD NOT be used.

   Access to block/volume storage is logically at a lower layer of the
   I/O stack than NFSv4, and hence NFSv4 security is not directly
   applicable to protocols that access such storage directly.  Depending
   on the protocol, some of the security mechanisms provided by NFSv4
   (e.g., encryption, cryptographic integrity) may not be available, or
   may be provided via different means.  At one extreme, pNFS with
   block/volume storage can be used with storage access protocols (e.g.,
   parallel SCSI) that provide essentially no security functionality.
   At the other extreme, pNFS may be used with storage protocols such as
   iSCSI that can provide significant security functionality.  It is the
   responsibility of those administering and deploying pNFS with a
   block/volume storage access protocol to ensure that appropriate
   protection is provided to that protocol (physical security is a
   common means for protocols not based on IP).  In environments where
   the security requirements for the storage protocol cannot be met,
   pNFS block/volume storage layouts SHOULD NOT be used.

   When security is available for a storage protocol, it is generally at
   a different granularity and with a different notion of identity than
   NFSv4 (e.g., NFSv4 controls user access to files, iSCSI controls
   initiator access to volumes).  The responsibility for enforcing
   appropriate correspondences between these security layers is placed
   upon the pNFS client.  As with the issues in the first paragraph of
   this section, in environments where the security requirements are
   such that client-side protection from access to storage outside of
   the layout is not sufficient, pNFS block/volume storage layouts
   SHOULD NOT be used.
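   The following non-normative C sketch illustrates the kind of
   client-side check described above: before issuing an I/O directly to
   the storage device, the client verifies that the byte range is
   covered by an extent it currently holds with sufficient access.  The
   struct held_extent type and the single-extent coverage test are
   simplifications assumed for illustration; they are not the XDR layout
   data structures defined in Section 2.3.

      /*
       * Non-normative sketch: client-side enforcement that a block I/O
       * stays within currently held layout extents.  The data structure
       * is an illustrative simplification, not the protocol's XDR.
       */
      #include <stdbool.h>
      #include <stdint.h>
      #include <stddef.h>

      struct held_extent {
          uint64_t file_offset;  /* start of the extent within the file */
          uint64_t length;       /* extent length in bytes              */
          bool     writable;     /* false for read-only extents         */
      };

      /*
       * Return true only if [offset, offset + count) lies entirely
       * within a held extent that permits the requested access.  For
       * brevity, this sketch requires a single covering extent and
       * omits overflow checks; a real client would also accept coverage
       * by several adjacent extents.
       */
      static bool io_permitted(const struct held_extent *ext, size_t n,
                               uint64_t offset, uint64_t count,
                               bool is_write)
      {
          size_t i;

          for (i = 0; i < n; i++) {
              if (is_write && !ext[i].writable)
                  continue;
              if (offset >= ext[i].file_offset &&
                  offset + count <= ext[i].file_offset + ext[i].length)
                  return true;
          }
          return false; /* outside the held layout: do not touch the SAN */
      }

   A check of this form addresses only the extent-range and read-only
   aspects discussed above; enforcement of ACLs and file open modes must
   be applied separately before a request is sent to the storage device.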
4. Conclusions

   This draft specifies the block/volume layout type for pNFS and
   associated functionality.

5. IANA Considerations

   There are no IANA considerations in this document.  All pNFS IANA
   Considerations are covered in [NFSV4.1].

6. Acknowledgments

   This draft draws extensively on the authors' familiarity with the
   mapping functionality and protocol in EMC's MPFS (previously named
   HighRoad) system [MPFS].  The protocol used by MPFS is called FMP
   (File Mapping Protocol); it is an add-on protocol that runs in
   parallel with file system protocols such as NFSv3 to provide
   pNFS-like functionality for block/volume storage.  While drawing on
   FMP, the data structures and functional considerations in this draft
   differ in significant ways, based on lessons learned and the
   opportunity to take advantage of NFSv4 features such as COMPOUND
   operations.  The design to support pNFS client participation in
   copy-on-write is based on text and ideas contributed by Craig
   Everhart.

   Andy Adamson, Ben Campbell, Richard Chandler, Benny Halevy, Fredric
   Isaman, and Mario Wurzl all helped to review drafts of this
   specification.

7. References

7.1. Normative References

   [LEGAL]   IETF Trust, "Legal Provisions Relating to IETF Documents",
             URL http://trustee.ietf.org/docs/IETF-Trust-License-Policy.pdf,
             November 2008.

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

   [NFSV4.1] Shepler, S., Eisler, M., and Noveck, D., ed., "NFSv4 Minor
             Version 1", draft-ietf-nfsv4-minorversion1-26.txt,
             Internet Draft, September 2008.

   [XDR]     Eisler, M., "XDR: External Data Representation Standard",
             STD 67, RFC 4506, May 2006.

7.2. Informative References

   [MPFS]    EMC Corporation, "EMC Celerra Multi-Path File System", EMC
             Data Sheet, available at:
             http://www.emc.com/collateral/software/data-sheet/h2006-celerra-mpfs-mpfsi.pdf
             (link checked 13 March 2008)

   [SMIS]    SNIA, "SNIA Storage Management Initiative Specification",
             version 1.0.2, available at:
             http://www.snia.org/tech_activities/standards/curr_standards/smi/SMI-S_Technical_Position_v1.0.3r1.pdf
             (link checked 13 March 2008)

Authors' Addresses

   David L. Black
   EMC Corporation
   176 South Street
   Hopkinton, MA 01748

   Phone: +1 (508) 293-7953
   Email: black_david@emc.com

   Stephen Fridella
   EMC Corporation
   228 South Street
   Hopkinton, MA 01748

   Phone: +1 (508) 249-3528
   Email: fridella_stephen@emc.com

   Jason Glasgow
   Google
   5 Cambridge Center
   Cambridge, MA 02142

   Phone: +1 (617) 575 1599
   Email: jglasgow@aya.yale.edu

Intellectual Property Statement

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed to
   pertain to the implementation or use of the technology described in
   this document or the extent to which any license under such rights
   might or might not be available; nor does it represent that it has
   made any independent effort to identify any such rights.  Information
   on the procedures with respect to rights in RFC documents can be
   found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use of
   such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository at
   http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Disclaimer of Validity

   This document and the information contained herein are provided on an
   "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
   OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS
   OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF
   THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
   WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Copyright Statement

   Copyright (C) The IETF Trust (2008).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

Acknowledgment

   Funding for the RFC Editor function is currently provided by the
   Internet Society.