2 NFSv4 Working Group D. Black 3 Internet Draft S. Fridella 4 Expires: May 25, 2009 J. Glasgow 5 Intended Status: Proposed Standard EMC Corporation 6 November 25, 2008 8 pNFS Block/Volume Layout 9 draft-ietf-nfsv4-pnfs-block-10.txt 11 Status of this Memo 13 By submitting this Internet-Draft, each author represents that 14 any applicable patent or other IPR claims of which he or she is 15 aware have been or will be disclosed, and any of which he or she 16 becomes aware will be disclosed, in accordance with Section 6 of 17 BCP 79. 19 Internet-Drafts are working documents of the Internet Engineering 20 Task Force (IETF), its areas, and its working groups. Note that 21 other groups may also distribute working documents as Internet- 22 Drafts. 24 Internet-Drafts are draft documents valid for a maximum of six months 25 and may be updated, replaced, or obsoleted by other documents at any 26 time.
It is inappropriate to use Internet-Drafts as reference 27 material or to cite them other than as "work in progress." 29 The list of current Internet-Drafts can be accessed at 30 http://www.ietf.org/ietf/1id-abstracts.txt 32 The list of Internet-Draft Shadow Directories can be accessed at 33 http://www.ietf.org/shadow.html 35 This Internet-Draft will expire in May 2009. 37 Abstract 39 Parallel NFS (pNFS) extends NFSv4 to allow clients to directly access 40 file data on the storage used by the NFSv4 server. This ability to 41 bypass the server for data access can increase both performance and 42 parallelism, but requires additional client functionality for data 43 access, some of which is dependent on the class of storage used. The 44 main pNFS operations draft specifies storage-class-independent 45 extensions to NFS; this draft specifies the additional extensions 46 (primarily data structures) for use of pNFS with block and volume 47 based storage. 49 Conventions used in this document 51 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 52 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 53 document are to be interpreted as described in RFC-2119 [RFC2119]. 55 Table of Contents 57 1. Introduction...................................................3 58 1.1. General Definitions.......................................3 59 1.2. XDR Description of NFSv4.1 block layout...................4 60 2. Block Layout Description.......................................5 61 2.1. Background and Architecture...............................5 62 2.2. GETDEVICELIST and GETDEVICEINFO...........................6 63 2.2.1. Volume Identification................................6 64 2.2.2. Volume Topology......................................7 65 2.2.3. GETDEVICELIST and GETDEVICEINFO deviceid4...........10 66 2.3. Data Structures: Extents and Extent Lists................10 67 2.3.1. Layout Requests and Extent Lists....................13 68 2.3.2. Layout Commits......................................14 69 2.3.3. Layout Returns......................................15 70 2.3.4. Client Copy-on-Write Processing.....................15 71 2.3.5. Extents are Permissions.............................17 72 2.3.6. End-of-file Processing..............................18 73 2.3.7. Layout Hints........................................19 74 2.3.8. Client Fencing......................................19 75 2.4. Crash Recovery Issues....................................21 76 2.5. Recalling resources: CB_RECALL_ANY.......................22 77 2.6. Transient and Permanent Errors...........................22 78 3. Security Considerations.......................................23 79 4. Conclusions...................................................24 80 5. IANA Considerations...........................................24 81 6. Acknowledgments...............................................25 82 7. References....................................................25 83 7.1. Normative References.....................................25 84 7.2. Informative References...................................25 85 Author's Addresses...............................................26 86 Intellectual Property Statement..................................26 87 Disclaimer of Validity...........................................27 88 Copyright Statement..............................................27 89 Acknowledgment...................................................27 91 1. 
Introduction 93 Figure 1 shows the overall architecture of a pNFS system: 95 +-----------+ 96 |+-----------+ +-----------+ 97 ||+-----------+ | | 98 ||| | NFSv4.1 + pNFS | | 99 +|| Clients |<------------------------------>| Server | 100 +| | | | 101 +-----------+ | | 102 ||| +-----------+ 103 ||| | 104 ||| | 105 ||| +-----------+ | 106 ||| |+-----------+ | 107 ||+----------------||+-----------+ | 108 |+-----------------||| | | 109 +------------------+|| Storage |------------+ 110 +| Systems | 111 +-----------+ 113 Figure 1 pNFS Architecture 115 The overall approach is that pNFS-enhanced clients obtain sufficient 116 information from the server to enable them to access the underlying 117 storage (on the Storage Systems) directly. See the pNFS portion of 118 [NFSV4.1] for more details. This draft is concerned with access from 119 pNFS clients to Storage Systems over storage protocols based on 120 blocks and volumes, such as the SCSI protocol family (e.g., parallel 121 SCSI, FCP for Fibre Channel, iSCSI, SAS, and FCoE). This class of 122 storage is referred to as block/volume storage. While the Server to 123 Storage System protocol is not of concern for interoperability here, 124 it will typically also be a block/volume protocol when clients use 125 block/volume protocols. 127 1.1. General Definitions 129 The following definitions are provided to establish an appropriate 130 context for the reader. 132 Byte 134 This document defines a byte as an octet, i.e., a datum exactly 8 135 bits in length. 137 Client 139 The "client" is the entity that accesses the NFS server's 140 resources. The client may be an application which contains the 141 logic to access the NFS server directly. The client may also be 142 the traditional operating system client that provides remote file 143 system services for a set of applications. 145 Server 147 The "Server" is the entity responsible for coordinating client 148 access to a set of file systems and is identified by a Server 149 owner. 151 1.2. XDR Description 153 This document contains the XDR ([XDR]) description of the NFSv4.1 154 block layout protocol. The XDR description is embedded in this 155 document in a way that makes it simple for the reader to extract into 156 a ready-to-compile form. The reader can feed this document into the 157 following shell script to produce the machine-readable XDR 158 description of the NFSv4.1 block layout: 160 #!/bin/sh 161 grep '^ *///' | sed 's?^ *///??' 163 I.e., if the above script is stored in a file called "extract.sh", and 164 this document is in a file called "spec.txt", then the reader can do: 166 sh extract.sh < spec.txt > nfs4_block_layout_spec.x 168 The effect of the script is to remove both leading white space and a 169 sentinel sequence of "///" from each matching line. 171 The embedded XDR file header follows, with subsequent pieces embedded 172 throughout the document: 174 ////* 175 /// * This file was machine generated for 176 /// * draft-ietf-nfsv4-pnfs-block-09 177 /// * Last updated Wed Jun 11 10:57:06 EST 2008 178 /// */ 179 ////* 180 /// * Copyright (C) The IETF Trust (2007-2008) 181 /// * All Rights Reserved. 182 /// * 183 /// * Copyright (C) The Internet Society (1998-2006). 184 /// * All Rights Reserved. 185 /// */ 186 /// 187 ////* 188 /// * nfs4_block_layout_prot.x 189 /// */ 190 /// 191 ///%#include "nfsv41.h" 192 /// 194 The XDR code contained in this document depends on types from the 195 nfsv41.x file.
This includes both NFS types that end with a 4, such 196 as offset4, length4, etc., as well as more generic types such as 197 uint32_t and uint64_t. 199 2. Block Layout Description 201 2.1. Background and Architecture 203 The fundamental storage abstraction supported by block/volume storage 204 is a storage volume consisting of a sequential series of fixed size 205 blocks. This can be thought of as a logical disk; it may be realized 206 by the Storage System as a physical disk, a portion of a physical 207 disk or something more complex (e.g., concatenation, striping, RAID, 208 and combinations thereof) involving multiple physical disks or 209 portions thereof. 211 A pNFS layout for this block/volume class of storage is responsible 212 for mapping from an NFS file (or portion of a file) to the blocks of 213 storage volumes that contain the file. The blocks are expressed as 214 extents with 64-bit offsets and lengths using the existing NFSv4 215 offset4 and length4 types. Clients must be able to perform I/O to 216 the block extents without affecting additional areas of storage 217 (especially important for writes); therefore, extents MUST be aligned 218 to 512-byte boundaries, and writable extents MUST be aligned to the 219 block size used by the NFSv4 server in managing the actual file 220 system (4 kilobytes and 8 kilobytes are common block sizes). This 221 block size is available as the NFSv4.1 layout_blksize attribute 222 [NFSV4.1]. Readable extents SHOULD be aligned to the block size used 223 by the NFSv4 server, but in order to support legacy file systems with 224 fragments, alignment to 512-byte boundaries is acceptable. 226 The pNFS operation for requesting a layout (LAYOUTGET) includes the 227 "layoutiomode4 loga_iomode" argument which indicates whether the 228 requested layout is for read-only use or read-write use. A read-only 229 layout may contain holes that are read as zero, whereas a read-write 230 layout will contain allocated, but un-initialized storage in those 231 holes (read as zero, can be written by client). This draft also 232 supports client participation in copy-on-write (e.g., for file systems 233 with snapshots) by providing both read-only and un-initialized 234 storage for the same range in a layout. Reads are initially 235 performed on the read-only storage, with writes going to the un- 236 initialized storage. After the first write that initializes the un- 237 initialized storage, all reads are performed on that now-initialized 238 writeable storage, and the corresponding read-only storage is no 239 longer used. 241 2.2. GETDEVICELIST and GETDEVICEINFO 243 2.2.1. Volume Identification 245 Storage Systems such as storage arrays can have multiple physical 246 network ports that need not be connected to a common network, 247 resulting in a pNFS client having simultaneous multipath access to 248 the same storage volumes via different ports on different networks. 249 The networks may not even be the same technology - for example, 250 access to the same volume via both iSCSI and Fibre Channel is 251 possible, hence network addresses are difficult to use for volume 252 identification. For this reason, this pNFS block layout identifies 253 storage volumes by content, for example providing the means to match 254 (unique portions of) labels used by volume managers. Any block pNFS 255 system using this layout MUST support a means of content-based unique 256 volume identification that can be employed via the data structure 257 given here.
259 ///struct pnfs_block_sig_component4 { /* disk signature component */ 260 /// int64_t bsc_sig_offset; /* byte offset of component 261 /// on volume*/ 262 /// opaque bsc_contents<>; /* contents of this component 263 /// of the signature */ 264 ///}; 265 /// 267 Note that the opaque "bsc_contents" field in the 268 "pnfs_block_sig_component4" structure MUST NOT be interpreted as a 269 zero-terminated string, as it may contain embedded zero-valued bytes. 270 There are no restrictions on alignment (e.g., neither bsc_sig_offset 271 nor the length is required to be a multiple of 4). The 272 bsc_sig_offset is a signed quantity which, when positive, represents a 273 byte offset from the start of the volume, and, when negative, 274 represents a byte offset from the end of the volume. 276 Negative offsets are permitted in order to simplify the client 277 implementation on systems where the device label is found at a fixed 278 offset from the end of the volume. If the server uses negative 279 offsets to describe the signature, then the client and server MUST 280 NOT see different volume sizes. Negative offsets SHOULD NOT be used 281 in systems that dynamically resize volumes unless care is taken to 282 ensure that the device label is always present at the offset from the 283 end of the volume as seen by the clients. 285 A signature is an array of up to "PNFS_BLOCK_MAX_SIG_COMP" (defined 286 below) signature components. The client MUST NOT assume that all 287 signature components are colocated within a single sector on a block 288 device. 290 The pNFS client block layout driver uses this volume identification 291 to map pnfs_block_volume_type4 PNFS_BLOCK_VOLUME_SIMPLE deviceid4s to 292 its local view of a LUN.
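The following non-normative C fragment sketches how a client-side layout driver might test whether a candidate LUN matches a set of signature components. The sig_component structure and the read_lun() helper are illustrative stand-ins, not interfaces defined by this document or by [NFSV4.1]; the points being illustrated are the handling of negative bsc_sig_offset values and the use of an opaque memory comparison rather than a string comparison.

   /*
    * Sketch only: test whether a candidate LUN matches a disk signature.
    * sig_component loosely mirrors pnfs_block_sig_component4; read_lun()
    * is a hypothetical stand-in for a raw read from the candidate LUN.
    */
   #include <stdint.h>
   #include <stdio.h>
   #include <string.h>

   struct sig_component {
       int64_t        sig_offset;   /* bsc_sig_offset: signed byte offset */
       uint32_t       len;          /* length of bsc_contents             */
       const uint8_t *contents;     /* bsc_contents: opaque, may hold 0s  */
   };

   /* Hypothetical raw read from the candidate LUN; returns 0 on success.
    * A fake 1 MiB volume is simulated whose byte at offset i is i & 0xff. */
   static int read_lun(uint64_t lun_size, uint64_t off, uint8_t *buf,
                       uint32_t len)
   {
       if (off > lun_size || lun_size - off < len)
           return -1;
       for (uint32_t i = 0; i < len; i++)
           buf[i] = (uint8_t)((off + i) & 0xff);
       return 0;
   }

   /* Returns 1 if every signature component matches the candidate LUN. */
   static int lun_matches_signature(uint64_t lun_size,
                                    const struct sig_component *sig,
                                    int ncomp)
   {
       for (int i = 0; i < ncomp; i++) {
           /* A negative bsc_sig_offset is relative to the end of the volume. */
           uint64_t off = sig[i].sig_offset >= 0
                        ? (uint64_t)sig[i].sig_offset
                        : lun_size - (uint64_t)(-sig[i].sig_offset);
           uint8_t buf[512];

           if (sig[i].len > sizeof(buf))
               return 0;                      /* keep the sketch simple */
           if (read_lun(lun_size, off, buf, sig[i].len) != 0)
               return 0;
           /* Opaque comparison; bsc_contents may contain embedded zeros,
            * so it must never be treated as a zero-terminated string.   */
           if (memcmp(buf, sig[i].contents, sig[i].len) != 0)
               return 0;
       }
       return 1;
   }

   int main(void)
   {
       const uint64_t lun_size = 1024 * 1024;
       const uint8_t  label[4] = { 0x00, 0x01, 0x02, 0x03 };
       struct sig_component sig[1] = { { 512, sizeof(label), label } };

       printf("LUN matches: %d\n", lun_matches_signature(lun_size, sig, 1));
       return 0;
   }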
294 2.2.2. Volume Topology 296 The pNFS block server volume topology is expressed as an arbitrary 297 combination of base volume types enumerated in the following data 298 structures. The individual components of the topology are contained 299 in an array and components may refer to other components by using 300 array indices. 302 ///enum pnfs_block_volume_type4 { 303 /// PNFS_BLOCK_VOLUME_SIMPLE = 0, /* volume maps to a single 304 /// LU */ 305 /// PNFS_BLOCK_VOLUME_SLICE = 1, /* volume is a slice of 306 /// another volume */ 307 /// PNFS_BLOCK_VOLUME_CONCAT = 2, /* volume is a 308 /// concatenation of 309 /// multiple volumes */ 310 /// PNFS_BLOCK_VOLUME_STRIPE = 3 /* volume is striped across 311 /// multiple volumes */ 312 ///}; 313 /// 314 ///const PNFS_BLOCK_MAX_SIG_COMP = 16; /* maximum components per 315 /// signature */ 316 ///struct pnfs_block_simple_volume_info4 { 317 /// pnfs_block_sig_component4 bsv_ds; 318 /// /* disk signature */ 319 ///}; 320 /// 321 /// 322 ///struct pnfs_block_slice_volume_info4 { 323 /// offset4 bsv_start; /* offset of the start of the 324 /// slice in bytes */ 325 /// length4 bsv_length; /* length of slice in bytes */ 326 /// uint32_t bsv_volume; /* array index of sliced 327 /// volume */ 328 ///}; 329 /// 330 ///struct pnfs_block_concat_volume_info4 { 331 /// uint32_t bcv_volumes<>; /* array indices of volumes 332 /// which are concatenated */ 333 ///}; 334 /// 335 ///struct pnfs_block_stripe_volume_info4 { 336 /// length4 bsv_stripe_unit; /* size of stripe in bytes */ 337 /// uint32_t bsv_volumes<>; /* array indices of volumes 338 /// which are striped across -- 339 /// MUST be same size */ 340 ///}; 341 /// 342 ///union pnfs_block_volume4 switch (pnfs_block_volume_type4 type) { 343 /// case PNFS_BLOCK_VOLUME_SIMPLE: 344 /// pnfs_block_simple_volume_info4 bv_simple_info; 345 /// case PNFS_BLOCK_VOLUME_SLICE: 346 /// pnfs_block_slice_volume_info4 bv_slice_info; 347 /// case PNFS_BLOCK_VOLUME_CONCAT: 348 /// pnfs_block_concat_volume_info4 bv_concat_info; 349 /// case PNFS_BLOCK_VOLUME_STRIPE: 350 /// pnfs_block_stripe_volume_info4 bv_stripe_info; 351 ///}; 352 /// 353 ////* block layout specific type for da_addr_body */ 354 ///struct pnfs_block_deviceaddr4 { 355 /// pnfs_block_volume4 bda_volumes<>; /* array of volumes */ 356 ///}; 357 /// 359 The "pnfs_block_deviceaddr4" data structure 360 allows arbitrarily complex nested volume structures to be encoded. 361 The types of aggregations that are allowed are stripes, 362 concatenations, and slices. Note that the volume topology expressed 363 in the pnfs_block_deviceaddr4 data structure will always resolve to a 364 set of volumes of type PNFS_BLOCK_VOLUME_SIMPLE. The array 365 of volumes is ordered such that the root of the volume hierarchy is 366 the last element of the array. Concat, slice and stripe volumes MUST 367 refer to volumes defined by lower indexed elements of the array. 369 The "pnfs_block_deviceaddr4" data structure is returned by the 370 server as the storage-protocol-specific opaque field da_addr_body in 371 the "device_addr4" structure by a successful GETDEVICEINFO operation 372 [NFSV4.1]. 374 As noted above, all device_addr4 structures eventually resolve to a 375 set of volumes of type PNFS_BLOCK_VOLUME_SIMPLE. These volumes are 376 each uniquely identified by a set of signature components. 377 Complicated volume hierarchies may be composed of dozens of volumes 378 each with several signature components; thus, the device address may 379 require several kilobytes. The client SHOULD be prepared to allocate 380 a large buffer to contain the result. In the case of the server 381 returning NFS4ERR_TOOSMALL, the client SHOULD allocate a buffer of at 382 least gdir_mincount_bytes to contain the expected result and retry 383 the GETDEVICEINFO request.
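The following non-normative C fragment sketches how a client might resolve a byte offset on the root volume of a decoded pnfs_block_deviceaddr4 hierarchy down to a PNFS_BLOCK_VOLUME_SIMPLE volume and an offset on that volume. The types are simplified stand-ins for the XDR above, and the volume lengths are supplied directly; a real client would derive the length of each simple volume from the LUN it identified as described in section 2.2.1.

   /*
    * Sketch only: resolve a byte offset on the root of a decoded volume
    * hierarchy to (simple volume index, offset on that simple volume).
    * The struct is a simplified stand-in for pnfs_block_volume4.
    */
   #include <stdint.h>
   #include <stdio.h>

   enum vol_type { VOL_SIMPLE, VOL_SLICE, VOL_CONCAT, VOL_STRIPE };

   struct volume {
       enum vol_type type;
       uint64_t length;       /* usable length of this volume, in bytes    */
       uint64_t start;        /* SLICE: bsv_start within the sliced volume */
       uint64_t stripe_unit;  /* STRIPE: bsv_stripe_unit, in bytes         */
       uint32_t vols[8];      /* indices of lower-numbered component vols  */
       uint32_t nvols;
   };

   static void resolve(const struct volume *v, uint32_t idx, uint64_t off,
                       uint32_t *simple_idx, uint64_t *simple_off)
   {
       while (v[idx].type != VOL_SIMPLE) {
           const struct volume *cur = &v[idx];

           if (cur->type == VOL_SLICE) {
               off += cur->start;            /* shift into the sliced volume */
               idx  = cur->vols[0];
           } else if (cur->type == VOL_CONCAT) {
               uint32_t i = 0;               /* find the component holding off */
               while (i + 1 < cur->nvols && off >= v[cur->vols[i]].length) {
                   off -= v[cur->vols[i]].length;
                   i++;
               }
               idx = cur->vols[i];
           } else {                          /* VOL_STRIPE */
               uint64_t su        = cur->stripe_unit;
               uint64_t stripe_no = off / su;
               uint64_t within    = off % su;
               idx = cur->vols[stripe_no % cur->nvols];
               off = (stripe_no / cur->nvols) * su + within;
           }
       }
       *simple_idx = idx;
       *simple_off = off;
   }

   int main(void)
   {
       /* Root (last element, index 3) is a slice that skips a 1 MiB header
        * of a volume striped across two 1 GiB simple volumes.             */
       struct volume v[4] = {
           { VOL_SIMPLE, 1ULL << 30, 0, 0,     {0},    0 },
           { VOL_SIMPLE, 1ULL << 30, 0, 0,     {0},    0 },
           { VOL_STRIPE, 2ULL << 30, 0, 65536, {0, 1}, 2 },
           { VOL_SLICE,  (2ULL << 30) - (1u << 20), 1u << 20, 0, {2}, 1 }
       };
       uint32_t sidx;
       uint64_t soff;

       resolve(v, 3, 200000, &sidx, &soff);
       printf("simple volume %u, offset %llu\n", (unsigned)sidx,
              (unsigned long long)soff);
       return 0;
   }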
385 2.2.3. GETDEVICELIST and GETDEVICEINFO deviceid4 387 The server in response to a GETDEVICELIST request typically will 388 return a single "deviceid4" in the gdlr_deviceid_list array. This is 389 because the deviceid4 when passed to GETDEVICEINFO will return a 390 "device_addr4" which encodes the entire volume hierarchy. In the 391 case of copy-on-write file systems, the "gdlr_deviceid_list" array 392 may contain two deviceid4s, one referencing the read-only volume 393 hierarchy, and one referencing the writable volume hierarchy. There 394 is no required ordering of the readable and writable ids in the array 395 as the volumes are uniquely identified by their deviceid4, and are 396 referred to by layouts using the deviceid4. Another example of the 397 server returning multiple device items occurs when the file handle 398 represents the root of a name space spanning multiple physical file 399 systems on the server, each with a different volume hierarchy. In 400 this example a server implementation may return either a list of 401 deviceids used by each of the physical file systems, or it may return 402 an empty list. 404 Each deviceid4 returned by a successful GETDEVICELIST operation is a 405 shorthand id used to reference the whole volume topology. These 406 device ids, as well as device ids returned in extents of a LAYOUTGET 407 operation, can be used as input to the GETDEVICEINFO operation. 408 Decoding the "pnfs_block_deviceaddr4" results in a flat ordering of 409 data blocks mapped to PNFS_BLOCK_VOLUME_SIMPLE volumes. Combined 410 with the mapping to a client LUN described in 2.2.1 Volume 411 Identification, a logical volume offset can be mapped to a block on a 412 pNFS client LUN [NFSV4.1]. 414 2.3. Data Structures: Extents and Extent Lists 416 A pNFS block layout is a list of extents within a flat array of data 417 blocks in a logical volume. The details of the volume topology can 418 be determined by using the GETDEVICEINFO operation (see discussion of 419 volume identification, section 2.2 above). The block layout 420 describes the individual block extents on the volume that make up the 421 file. The offsets and lengths contained in an extent are specified in 422 units of bytes. 424 ///enum pnfs_block_extent_state4 { 425 /// PNFS_BLOCK_READ_WRITE_DATA = 0, /* the data located by this 426 /// extent is valid 427 /// for reading and writing. */ 428 /// PNFS_BLOCK_READ_DATA = 1, /* the data located by this 429 /// extent is valid for reading 430 /// only; it may not be 431 /// written. */ 432 /// PNFS_BLOCK_INVALID_DATA = 2, /* the location is valid; the 433 /// data is invalid. It is a 434 /// newly (pre-) allocated 435 /// extent. There is physical 436 /// space on the volume. */ 437 /// PNFS_BLOCK_NONE_DATA = 3 /* the location is invalid. It 438 /// is a hole in the file. 439 /// There is no physical space 440 /// on the volume. */ 441 ///}; 443 /// 444 ///struct pnfs_block_extent4 { 445 /// deviceid4 bex_vol_id; /* id of logical volume on 446 /// which extent of file is 447 /// stored.
*/ 448 /// offset4 bex_file_offset; /* the starting byte offset in 449 /// the file */ 450 /// length4 bex_length; /* the size in bytes of the 451 /// extent */ 452 /// offset4 bex_storage_offset;/* the starting byte offset in 453 /// the volume */ 454 /// pnfs_block_extent_state4 bex_state; 455 /// /* the state of this extent */ 456 ///}; 457 /// 458 ////* block layout specific type for loc_body */ 459 ///struct pnfs_block_layout4 { 460 /// pnfs_block_extent4 blo_extents<>; 461 /// /* extents which make up this 462 /// layout. */ 463 ///}; 464 /// 466 The block layout consists of a list of extents which map the logical 467 regions of the file to physical locations on a volume. The 468 "bex_storage_offset" field within each extent identifies a location 469 on the logical volume specified by the "bex_vol_id" field in the 470 extent. The bex_vol_id itself is shorthand for the whole topology of 471 the logical volume on which the file is stored. The client is 472 responsible for translating this logical offset into an offset on the 473 appropriate underlying SAN logical unit. In most cases all extents 474 in a layout will reside on the same volume and thus have the same 475 bex_vol_id. In the case of copy on write file systems, the 476 PNFS_BLOCK_READ_DATA extents may have a different bex_vol_id from the 477 writable extents. 479 Each extent maps a logical region of the file onto a portion of the 480 specified logical volume. The bex_file_offset, bex_length, and 481 bex_state fields for an extent returned from the server are valid for 482 all extents. In contrast, the interpretation of the 483 bex_storage_offset field depends on the value of bex_state as follows 484 (in increasing order): 486 o PNFS_BLOCK_READ_WRITE_DATA means that bex_storage_offset is valid, 487 and points to valid/initialized data that can be read and written. 489 o PNFS_BLOCK_READ_DATA means that bex_storage_offset is valid and 490 points to valid/ initialized data which can only be read. Write 491 operations are prohibited; the client may need to request a read- 492 write layout. 494 o PNFS_BLOCK_INVALID_DATA means that bex_storage_offset is valid, 495 but points to invalid un-initialized data. This data must not be 496 physically read from the disk until it has been initialized. A 497 read request for a PNFS_BLOCK_INVALID_DATA extent must fill the 498 user buffer with zeros, unless the extent is covered by a 499 PNFS_BLOCK_READ_DATA extent of a copy-on-write file system. Write 500 requests must write whole server-sized blocks to the disk; bytes 501 not initialized by the user must be set to zero. Any write to 502 storage in a PNFS_BLOCK_INVALID_DATA extent changes the written 503 portion of the extent to PNFS_BLOCK_READ_WRITE_DATA; the pNFS 504 client is responsible for reporting this change via LAYOUTCOMMIT. 506 o PNFS_BLOCK_NONE_DATA means that bex_storage_offset is not valid, 507 and this extent may not be used to satisfy write requests. Read 508 requests may be satisfied by zero-filling as for 509 PNFS_BLOCK_INVALID_DATA. PNFS_BLOCK_NONE_DATA extents may be 510 returned by requests for readable extents; they are never returned 511 if the request was for a writeable extent. 513 An extent list lists all relevant extents in increasing order of the 514 bex_file_offset of each extent; any ties are broken by increasing 515 order of the extent state (bex_state). 517 2.3.1. Layout Requests and Extent Lists 519 Each request for a layout specifies at least three parameters: file 520 offset, desired size, and minimum size. 
If the status of a request 521 indicates success, the extent list returned must meet the following 522 criteria: 524 o A request for a readable (but not writeable) layout returns only 525 PNFS_BLOCK_READ_DATA or PNFS_BLOCK_NONE_DATA extents (but not 526 PNFS_BLOCK_INVALID_DATA or PNFS_BLOCK_READ_WRITE_DATA extents). 528 o A request for a writeable layout returns 529 PNFS_BLOCK_READ_WRITE_DATA or PNFS_BLOCK_INVALID_DATA extents (but 530 not PNFS_BLOCK_NONE_DATA extents). It may also return 531 PNFS_BLOCK_READ_DATA extents only when the offset ranges in those 532 extents are also covered by PNFS_BLOCK_INVALID_DATA extents to 533 permit writes. 535 o The first extent in the list MUST contain the requested starting 536 offset. 538 o The total size of extents within the requested range MUST cover at 539 least the minimum size. One exception is allowed: the total size 540 MAY be smaller if only readable extents were requested and EOF is 541 encountered. 543 o Extents in the extent list MUST be logically contiguous for a 544 read-only layout. For a read-write layout, the set of writable 545 extents (i.e., excluding PNFS_BLOCK_READ_DATA extents) MUST be 546 logically contiguous. Every PNFS_BLOCK_READ_DATA extent in a 547 read-write layout MUST be covered by one or more 548 PNFS_BLOCK_INVALID_DATA extents. This overlap of 549 PNFS_BLOCK_READ_DATA and PNFS_BLOCK_INVALID_DATA extents is the 550 only permitted extent overlap. 552 o Extents MUST be ordered in the list by starting offset, with 553 PNFS_BLOCK_READ_DATA extents preceding PNFS_BLOCK_INVALID_DATA 554 extents in the case of equal bex_file_offsets. 556 If the minimum requested size, loga_minlength, is zero, this is an 557 indication to the metadata server that the client desires any layout 558 at offset loga_offset or less that the metadata server has "readily 559 available". "Readily available" is subjective and depends on the layout type 560 and the pNFS server implementation. For block layout servers, 561 readily available SHOULD be interpreted such that readable layouts 562 are always available, even if some extents are in the 563 PNFS_BLOCK_NONE_DATA state. When processing requests for writable 564 layouts, a layout is readily available if extents can be returned in 565 the PNFS_BLOCK_READ_WRITE_DATA state. 567 2.3.2. Layout Commits 569 ////* block layout specific type for lou_body */ 570 ///struct pnfs_block_layoutupdate4 { 571 /// pnfs_block_extent4 blu_commit_list<>; 572 /// /* list of extents which 573 /// * now contain valid data. 574 /// */ 575 ///}; 576 /// 578 The "pnfs_block_layoutupdate4" structure is used by the client as the 579 block-protocol specific argument in a LAYOUTCOMMIT operation. The 580 "blu_commit_list" field is an extent list covering regions of the 581 file layout that were previously in the PNFS_BLOCK_INVALID_DATA 582 state, but have been written by the client and should now be 583 considered in the PNFS_BLOCK_READ_WRITE_DATA state. The bex_state 584 field of each extent in the blu_commit_list MUST be set to 585 PNFS_BLOCK_READ_WRITE_DATA. The extents in the commit list MUST be 586 disjoint and MUST be sorted by bex_file_offset. The 587 bex_storage_offset field is unused. Implementers should be aware 588 that a server may be unable to commit regions at a granularity 589 smaller than a file-system block (typically 4KB or 8KB). As noted 590 above, the block size that the server uses is available as an NFSv4 591 attribute, and any extents included in the "blu_commit_list" MUST be 592 aligned to this granularity and have a size that is a multiple of 593 this granularity. If the client believes that its actions have moved 594 the end-of-file into the middle of a block being committed, the 595 client MUST write zeroes from the end-of-file to the end of that 596 block before committing the block. Failure to do so may result in 597 junk (uninitialized data) appearing in that area if the file is 598 subsequently extended by moving the end-of-file.
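The following non-normative C fragment illustrates the alignment and zero-fill rules above: it computes a block-aligned extent suitable for blu_commit_list from a written byte range, along with the range of zeroes the client must write when the end-of-file falls inside the last committed block. The helper names and the byte_range type are illustrative only, not part of the protocol.

   /*
    * Sketch only: derive a block-aligned extent for blu_commit_list from a
    * written byte range, plus the zero-fill range required when the new
    * end-of-file falls inside the last block being committed.
    */
   #include <stdint.h>
   #include <stdio.h>

   struct byte_range { uint64_t offset; uint64_t length; };

   static uint64_t round_down(uint64_t x, uint64_t blk) { return x - x % blk; }
   static uint64_t round_up(uint64_t x, uint64_t blk)
   {
       return (x % blk) ? x + blk - x % blk : x;
   }

   /* Extent to report in LAYOUTCOMMIT for a written byte range, aligned to
    * and sized as a multiple of the server's block size.                   */
   static struct byte_range commit_extent(struct byte_range written,
                                          uint64_t blk)
   {
       struct byte_range r;
       r.offset = round_down(written.offset, blk);
       r.length = round_up(written.offset + written.length, blk) - r.offset;
       return r;
   }

   /* Bytes that must be zeroed before committing when the end-of-file lies
    * in the middle of a committed block; length 0 means no zeroing needed. */
   static struct byte_range eof_zero_fill(struct byte_range commit,
                                          uint64_t eof, uint64_t blk)
   {
       struct byte_range z = { 0, 0 };

       if (eof < commit.offset + commit.length && eof % blk != 0) {
           z.offset = eof;
           z.length = round_up(eof, blk) - eof;
       }
       return z;
   }

   int main(void)
   {
       const uint64_t blk = 4096;                 /* server layout_blksize */
       struct byte_range written = { 10000, 3000 };
       uint64_t eof = 13000;                      /* new end-of-file       */

       struct byte_range c = commit_extent(written, blk);
       struct byte_range z = eof_zero_fill(c, eof, blk);

       printf("commit offset=%llu length=%llu\n",
              (unsigned long long)c.offset, (unsigned long long)c.length);
       printf("zero   offset=%llu length=%llu\n",
              (unsigned long long)z.offset, (unsigned long long)z.length);
       return 0;
   }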
600 2.3.3. Layout Returns 602 The LAYOUTRETURN operation is done without any block layout specific 603 data. When the LAYOUTRETURN operation specifies a 604 LAYOUTRETURN4_FILE return type, then the layoutreturn_file4 data 605 structure specifies the region of the file layout that is no longer 606 needed by the client. The opaque "lrf_body" field of the 607 "layoutreturn_file4" data structure MUST have length zero. A 608 LAYOUTRETURN operation represents an explicit release of resources by 609 the client, usually done for the purpose of avoiding unnecessary 610 CB_LAYOUTRECALL operations in the future. The client may return 611 disjoint regions of the file by using multiple LAYOUTRETURN 612 operations within a single COMPOUND operation. 614 Note that the block/volume layout supports unilateral layout 615 revocation. When a layout is unilaterally revoked by the server, 616 usually due to the client's lease time expiring, or a delegation 617 being recalled, or the client failing to return a layout in a timely 618 manner, it is important for the sake of correctness that any in- 619 flight I/Os that the client issued before the layout was revoked are 620 rejected at the storage. For the block/volume protocol, this is 621 possible by fencing a client with an expired layout timer from the 622 physical storage. Note, however, that the granularity of this 623 operation can only be at the host/logical-unit level. Thus, if one 624 of a client's layouts is unilaterally revoked by the server, it will 625 effectively render useless *all* of the client's layouts for files 626 located on the storage units comprising the logical volume. This may 627 render useless the client's layouts for files in other file systems. 629 2.3.4. Client Copy-on-Write Processing 631 Copy-on-write is a mechanism used to support file and/or file system 632 snapshots. When writing to unaligned regions, or to regions smaller 633 than a file system block, the writer must copy the portions of the 634 original file data to a new location on disk. This behavior can 635 be implemented either on the client or on the server. The paragraphs 636 below describe how a pNFS block layout client implements access to a 637 file which requires copy-on-write semantics. 639 Distinguishing the PNFS_BLOCK_READ_WRITE_DATA and 640 PNFS_BLOCK_READ_DATA extent types in combination with the allowed 641 overlap of PNFS_BLOCK_READ_DATA extents with PNFS_BLOCK_INVALID_DATA 642 extents allows copy-on-write processing to be done by pNFS clients. 643 In classic NFS, this operation would be done by the server. Since 644 pNFS enables clients to do direct block access, it is useful for 645 clients to participate in copy-on-write operations. All block/volume 646 pNFS clients MUST support this copy-on-write processing.
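The following non-normative C fragment sketches the read-merge-write sequence that the remainder of this section specifies, using two in-memory buffers as stand-ins for the storage reached through the PNFS_BLOCK_READ_DATA and PNFS_BLOCK_INVALID_DATA extents covering the same file range; a real client would instead issue SAN reads and writes through the extents' bex_storage_offset mappings.

   /*
    * Sketch only: client-side copy-on-write merge.  Two in-memory buffers
    * stand in for the PNFS_BLOCK_READ_DATA storage and the
    * PNFS_BLOCK_INVALID_DATA storage covering the same file range.
    */
   #include <stdint.h>
   #include <stdio.h>
   #include <string.h>

   #define BLK   4096u                 /* server file-system block size */
   #define NBLKS 4u

   static uint8_t read_only_store[BLK * NBLKS];   /* PNFS_BLOCK_READ_DATA    */
   static uint8_t writable_store[BLK * NBLKS];    /* PNFS_BLOCK_INVALID_DATA */

   /* Write "len" bytes at byte offset "off" (relative to the covered range),
    * merging partial blocks with data copied from the read-only extent.    */
   static void cow_write(uint64_t off, const uint8_t *data, uint64_t len)
   {
       uint64_t first_blk = off / BLK;
       uint64_t last_blk  = (off + len - 1) / BLK;

       for (uint64_t b = first_blk; b <= last_blk; b++) {
           uint8_t  block[BLK];
           uint64_t blk_start = b * BLK;
           uint64_t lo = (off > blk_start) ? off - blk_start : 0;
           uint64_t hi = (off + len < blk_start + BLK) ? off + len - blk_start
                                                       : BLK;
           if (lo == 0 && hi == BLK) {
               /* Entire block overwritten: no need to fetch old data. */
               memcpy(block, data + (blk_start - off), BLK);
           } else {
               /* Partial block: fetch from the read-only extent, then merge. */
               memcpy(block, read_only_store + blk_start, BLK);
               memcpy(block + lo, data + (blk_start + lo - off), hi - lo);
           }
           /* Store the whole block via the PNFS_BLOCK_INVALID_DATA extent;
            * this range must later be reported in LAYOUTCOMMIT, and all
            * subsequent reads of it use the writable storage.             */
           memcpy(writable_store + blk_start, block, BLK);
       }
   }

   int main(void)
   {
       memset(read_only_store, 'o', sizeof(read_only_store));  /* old data */
       uint8_t new_data[6000];
       memset(new_data, 'n', sizeof(new_data));

       cow_write(1000, new_data, sizeof(new_data));   /* unaligned write */

       /* The first block now holds old data before offset 1000 and new
        * data from offset 1000 onward.                                   */
       printf("byte 999=%c byte 1000=%c byte 6999=%c byte 7000=%c\n",
              writable_store[999], writable_store[1000],
              writable_store[6999], writable_store[7000]);
       return 0;
   }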
648 When a client wishes to write data covered by a PNFS_BLOCK_READ_DATA 649 extent, it MUST have requested a writable layout from the server; 650 that layout will contain PNFS_BLOCK_INVALID_DATA extents to cover all 651 the data ranges of that layout's PNFS_BLOCK_READ_DATA extents. More 652 precisely, for any bex_file_offset range covered by one or more 653 PNFS_BLOCK_READ_DATA extents in a writable layout, the server MUST 654 include one or more PNFS_BLOCK_INVALID_DATA extents in the layout 655 that cover the same bex_file_offset range. When performing a write 656 to such an area of a layout, the client MUST effectively copy the 657 data from the PNFS_BLOCK_READ_DATA extent for any partial blocks of 658 bex_file_offset and range, merge in the changes to be written, and 659 write the result to the PNFS_BLOCK_INVALID_DATA extent for the blocks 660 for that bex_file_offset and range. That is, if entire blocks of 661 data are to be overwritten by an operation, the corresponding 662 PNFS_BLOCK_READ_DATA blocks need not be fetched, but any partial- 663 block writes must be merged with data fetched via 664 PNFS_BLOCK_READ_DATA extents before storing the result via 665 PNFS_BLOCK_INVALID_DATA extents. For the purposes of this 666 discussion, "entire blocks" and "partial blocks" refer to the 667 server's file-system block size. Storing of data in a 668 PNFS_BLOCK_INVALID_DATA extent converts the written portion of the 669 PNFS_BLOCK_INVALID_DATA extent to a PNFS_BLOCK_READ_WRITE_DATA 670 extent; all subsequent reads MUST be performed from this extent; the 671 corresponding portion of the PNFS_BLOCK_READ_DATA extent MUST NOT be 672 used after storing data in a PNFS_BLOCK_INVALID_DATA extent. If a 673 client writes only a portion of an extent, the extent may be split at 674 block aligned boundaries. 676 When a client wishes to write data to a PNFS_BLOCK_INVALID_DATA 677 extent that is not covered by a PNFS_BLOCK_READ_DATA extent, it MUST 678 treat this write identically to a write to a file not involved with 679 copy-on-write semantics. Thus, data must be written in at least 680 block size increments, aligned to multiples of block sized offsets, 681 and unwritten portions of blocks must be zero filled. 683 In the LAYOUTCOMMIT operation that normally sends updated layout 684 information back to the server, for writable data, some 685 PNFS_BLOCK_INVALID_DATA extents may be committed as 686 PNFS_BLOCK_READ_WRITE_DATA extents, signifying that the storage at 687 the corresponding bex_storage_offset values has been stored into and 688 is now to be considered as valid data to be read. 689 PNFS_BLOCK_READ_DATA extents are not committed to the server. For 690 extents that the client receives via LAYOUTGET as 691 PNFS_BLOCK_INVALID_DATA and returns via LAYOUTCOMMIT as 692 PNFS_BLOCK_READ_WRITE_DATA, the server will understand that the 693 PNFS_BLOCK_READ_DATA mapping for that extent is no longer valid or 694 necessary for that file. 696 2.3.5. Extents are Permissions 698 Layout extents returned to pNFS clients grant permission to read or 699 write; PNFS_BLOCK_READ_DATA and PNFS_BLOCK_NONE_DATA are read-only 700 (PNFS_BLOCK_NONE_DATA reads as zeroes), PNFS_BLOCK_READ_WRITE_DATA 701 and PNFS_BLOCK_INVALID_DATA are read/write, (PNFS_BLOCK_INVALID_DATA 702 reads as zeros, any write converts it to PNFS_BLOCK_READ_WRITE_DATA). 
703 This is the only client means of obtaining permission to perform 704 direct I/O to storage devices; a pNFS client MUST NOT perform direct 705 I/O operations that are not permitted by an extent held by the 706 client. Client adherence to this rule places the pNFS server in 707 control of potentially conflicting storage device operations, 708 enabling the server to determine what does conflict and how to avoid 709 conflicts by granting and recalling extents to/from clients. 711 Block/volume class storage devices are not required to perform read 712 and write operations atomically. Overlapping concurrent read and 713 write operations to the same data may cause the read to return a 714 mixture of before-write and after-write data. Overlapping write 715 operations can be worse, as the result could be a mixture of data 716 from the two write operations; data corruption can occur if the 717 underlying storage is striped and the operations complete in 718 different orders on different stripes. A pNFS server can avoid these 719 conflicts by implementing a single writer XOR multiple readers 720 concurrency control policy when there are multiple clients who wish 721 to access the same data. This policy MUST be implemented when 722 storage devices do not provide atomicity for concurrent read/write 723 and write/write operations to the same data. 725 If a client makes a layout request that conflicts with an existing 726 layout delegation, the request will be rejected with the error 727 NFS4ERR_LAYOUTTRYLATER. The client is then expected to retry the 728 request after a short interval. During this interval the server 729 SHOULD recall the conflicting portion of the layout delegation from 730 the client that currently holds it. This reject-and-retry approach 731 does not prevent client starvation when there is contention for the 732 layout of a particular file. For this reason, a pNFS server SHOULD 733 implement a mechanism to prevent starvation. One possibility is that 734 the server can maintain a queue of rejected layout requests. Each 735 new layout request can be checked to see if it conflicts with a 736 previously rejected request, and if so, the newer request can be 737 rejected. Once the original requesting client retries its request, 738 its entry in the rejected request queue can be cleared, or the entry 739 in the rejected request queue can be removed when it reaches a 740 certain age. 742 NFSv4 supports mandatory locks and share reservations. These are 743 mechanisms that clients can use to restrict the set of I/O operations 744 that are permissible to other clients. Since all I/O operations 745 ultimately arrive at the NFSv4 server for processing, the server is 746 in a position to enforce these restrictions. However, with pNFS 747 layouts, I/Os will be issued from the clients that hold the layouts 748 directly to the storage devices that host the data. These devices 749 have no knowledge of files, mandatory locks, or share reservations, 750 and are not in a position to enforce such restrictions. For this 751 reason, the NFSv4 server MUST NOT grant layouts that conflict with 752 mandatory locks or share reservations. Further, if a conflicting 753 mandatory lock request or a conflicting open request arrives at the 754 server, the server MUST recall the part of the layout in conflict 755 with the request before granting the request.
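The following non-normative C fragment sketches the client-side check implied by this section: before issuing direct I/O, verify that the byte range is entirely covered by held extents whose state permits the operation. The types are illustrative stand-ins for the XDR above, and the extent array is assumed to be sorted as section 2.3.1 requires.

   /*
    * Sketch only: check that a proposed direct I/O falls entirely within
    * held extents that grant the needed permission.
    */
   #include <stdint.h>
   #include <stdio.h>

   enum ext_state { READ_WRITE_DATA, READ_DATA, INVALID_DATA, NONE_DATA };

   struct extent {
       uint64_t file_offset;
       uint64_t length;
       enum ext_state state;
   };

   static int state_allows(enum ext_state s, int is_write)
   {
       if (is_write)
           return s == READ_WRITE_DATA || s == INVALID_DATA;
       return 1;   /* all four states satisfy reads (some read as zeros) */
   }

   /* Returns 1 if [off, off+len) is fully covered by permitting extents. */
   static int io_permitted(const struct extent *ext, int num_ext,
                           uint64_t off, uint64_t len, int is_write)
   {
       uint64_t pos = off, end = off + len;

       for (int i = 0; i < num_ext && pos < end; i++) {
           uint64_t e_start = ext[i].file_offset;
           uint64_t e_end   = e_start + ext[i].length;

           if (pos < e_start)
               return 0;                     /* gap before the next extent */
           if (pos >= e_end || !state_allows(ext[i].state, is_write))
               continue;                     /* skip a non-covering extent */
           pos = e_end;                      /* covered up to extent end   */
       }
       return pos >= end;
   }

   int main(void)
   {
       /* A writable layout: a READ_DATA extent overlapped by an
        * INVALID_DATA extent (copy-on-write), then a READ_WRITE_DATA
        * extent.                                                        */
       struct extent layout[] = {
           { 0,     65536, READ_DATA },
           { 0,     65536, INVALID_DATA },
           { 65536, 65536, READ_WRITE_DATA },
       };
       int n = 3;

       printf("write 4096@60000:  %d\n", io_permitted(layout, n, 60000, 4096, 1));
       printf("write 4096@131072: %d\n", io_permitted(layout, n, 131072, 4096, 1));
       printf("read  4096@8192:   %d\n", io_permitted(layout, n, 8192, 4096, 0));
       return 0;
   }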
757 2.3.6. End-of-file Processing 759 The end-of-file location can be changed in two ways: implicitly as 760 the result of a WRITE or LAYOUTCOMMIT beyond the current end-of-file, 761 or explicitly as the result of a SETATTR request. Typically, when a 762 file is truncated by an NFSv4 client via the SETATTR call, the server 763 frees any disk blocks belonging to the file which are beyond the new 764 end-of-file byte, and MUST write zeros to the portion of the new end- 765 of-file block beyond the new end-of-file byte. These actions render 766 semantically invalid any pNFS layouts which refer to the blocks that 767 are freed or written. Therefore, the server MUST recall from clients 768 the portions of any pNFS layouts which refer to blocks that will be 769 freed or written by the server before processing the truncate 770 request. These recalls may take time to complete; as explained in 771 [NFSv4.1], if the server cannot respond to the client SETATTR request 772 in a reasonable amount of time, it SHOULD reply to the client with 773 the error NFS4ERR_DELAY. 775 Blocks in the PNFS_BLOCK_INVALID_DATA state which lie beyond the new 776 end-of-file block present a special case. The server has reserved 777 these blocks for use by a pNFS client with a writable layout for the 778 file, but the client has yet to commit the blocks, and they are not 779 yet a part of the file mapping on disk. The server MAY free these 780 blocks while processing the SETATTR request. If so, the server MUST 781 recall any layouts from pNFS clients which refer to the blocks before 782 processing the truncate. If the server does not free the 783 PNFS_BLOCK_INVALID_DATA blocks while processing the SETATTR request, 784 it need not recall layouts which refer only to the 785 PNFS_BLOCK_INVALID_DATA blocks. 787 When a file is extended implicitly by a WRITE or LAYOUTCOMMIT beyond 788 the current end-of-file, or extended explicitly by a SETATTR request, 789 the server need not recall any portions of any pNFS layouts. 791 2.3.7. Layout Hints 793 The SETATTR operation supports a layout hint attribute [NFSv4.1]. 794 When the client sets a layout hint (data type layouthint4) with a 795 layout type of LAYOUT4_BLOCK_VOLUME (the loh_type field), the 796 loh_body field contains a value of data type pnfs_block_layouthint4. 798 ////* block layout specific type for loh_body */ 799 ///struct pnfs_block_layouthint4 { 800 /// uint64_t blh_maximum_io_time; /* maximum i/o time in seconds 801 /// */ 802 ///}; 803 /// 805 The block layout client uses the layout hint data structure to 806 communicate to the server the maximum time that it may take an I/O to 807 execute on the client. Clients using block layouts MUST set the 808 layout hint attribute before using LAYOUTGET operations. 810 2.3.8. Client Fencing 812 The pNFS block protocol must handle situations in which a system 813 failure, typically a network connectivity issue, requires the server 814 to unilaterally revoke extents from one client in order to transfer 815 the extents to another client. The pNFS server implementation MUST 816 ensure that when resources are transferred to another client, they 817 are not used by the client originally owning them, and this must be 818 ensured against any possible combination of partitions and delays 819 among all of the participants in the protocol (server, storage, and 820 client). Two approaches to guaranteeing this isolation are possible 821 and are discussed below.
823 One implementation choice for fencing the block client from the block 824 storage is the use of LUN (Logical Unit Number) masking or mapping at 825 the storage systems or storage area network to disable access by the 826 client to be isolated. This requires server access to a management 827 interface for the storage system and authorization to perform LUN 828 masking and management operations. For example, SMI-S [SMIS] 829 provides a means to discover and mask LUNs, including a means of 830 associating clients with the necessary World Wide Names or Initiator 831 names to be masked. 833 In the absence of support for LUN masking, the server has to rely on 834 the clients to implement a timed lease I/O fencing mechanism. 835 Because clients do not know if the server is using LUN masking, in 836 all cases the client MUST implement timed lease fencing. In timed 837 lease fencing we define two time periods: the first, "lease_time", is 838 the length of a lease as defined by the server's lease_time attribute 839 (see [NFSV4.1]); the second, "blh_maximum_io_time", is the maximum 840 time it can take for a client I/O to the storage system to either 841 complete or fail; this value is often 30 seconds or 60 seconds, but 842 may be longer in some environments. If the maximum client I/O time 843 cannot be bounded, the client MUST use a value of all 1s as the 844 blh_maximum_io_time. 846 The client MUST use SETATTR with a layout hint of type 847 LAYOUT4_BLOCK_VOLUME to inform the server of its maximum I/O time 848 prior to issuing the first LAYOUTGET operation. The maximum I/O time 849 hint is a per-client attribute, and as such the server SHOULD 850 maintain the value set by each client. A server which implements 851 fencing via LUN masking SHOULD accept any maximum I/O time value from 852 a client. A server which does not implement fencing may return the 853 error NFS4ERR_INVAL to the SETATTR operation. Such a server SHOULD 854 return NFS4ERR_INVAL when a client sends an unbounded maximum I/O 855 time (all 1s), or when the maximum I/O time is significantly greater 856 than that of other clients using block layouts with pNFS. 858 When a client receives the error NFS4ERR_INVAL in response to the 859 SETATTR operation for a layout hint, the client MUST NOT use the 860 LAYOUTGET operation. After responding with NFS4ERR_INVAL to the 861 SETATTR for layout hint, the server MUST return the error 862 NFS4ERR_LAYOUTUNAVAILABLE to all subsequent LAYOUTGET operations from 863 that client. Thus the server, by returning either NFS4ERR_INVAL or 864 NFS4_OK, determines whether or not a client with a large or 865 unbounded maximum I/O time may use pNFS. 867 Using the lease time and the maximum I/O time values, we specify the 868 behavior of the client and server as follows. 870 When a client receives layout information via a LAYOUTGET operation, 871 those layouts are valid for at most "lease_time" seconds from when 872 the server granted them. A layout is renewed by any successful 873 SEQUENCE operation, or whenever a new stateid is created or updated 874 (see the section "Lease Renewal" of [NFSV4.1]). If the layout lease 875 is not renewed prior to expiration, the client MUST cease to use the 876 layout after "lease_time" seconds from when it either sent the 877 original LAYOUTGET command, or sent the last operation renewing the 878 lease. In other words, the client may not issue any I/O to blocks 879 specified by an expired layout. In the presence of large 880 communication delays between the client and server, it is even 881 possible for the lease to expire prior to the server response 882 arriving at the client. In such a situation, the client MUST NOT use 883 the expired layouts, and SHOULD revert to using standard NFSv4.1 READ 884 and WRITE operations. Furthermore, the client must be configured 885 such that I/O operations complete within the "blh_maximum_io_time" 886 even in the presence of multipath drivers that will retry I/Os via 887 multiple paths.
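The following non-normative C fragment records the timing arithmetic described above: the client stops issuing direct I/O "lease_time" seconds after it sent the last lease-renewing operation, while the server waits an additional "blh_maximum_io_time" before transferring layouts to another client. (The unbounded, all-1s value of blh_maximum_io_time is not handled here, since such a client would have been refused via NFS4ERR_INVAL by a server that relies on timed-lease fencing.)

   /*
    * Sketch only: timed-lease fencing arithmetic.  Both sides measure from
    * the conservative end of the interval: the client from when it sent
    * the lease-renewing operation, the server from when it received it.
    */
   #include <stdint.h>
   #include <stdio.h>

   struct fencing_params {
       uint64_t lease_time;          /* server lease_time attribute, seconds */
       uint64_t blh_maximum_io_time; /* layout hint: max client I/O time, s  */
   };

   /* Client: last instant (seconds since some epoch) at which it may still
    * issue direct I/O under layouts renewed by an operation sent at t_sent. */
   static uint64_t client_io_deadline(const struct fencing_params *p,
                                      uint64_t t_sent)
   {
       return t_sent + p->lease_time;
   }

   /* Server: earliest instant at which layouts held by an unreachable client
    * may be transferred to another client, given the time t_renewed at which
    * the server received the last lease-renewing operation.                 */
   static uint64_t server_reassign_time(const struct fencing_params *p,
                                        uint64_t t_renewed)
   {
       return t_renewed + p->lease_time + p->blh_maximum_io_time;
   }

   int main(void)
   {
       struct fencing_params p = { 90, 30 };   /* 90 s lease, 30 s max I/O */
       uint64_t t = 1000;                      /* illustrative timestamp   */

       printf("client must stop issuing I/O at t=%llu\n",
              (unsigned long long)client_io_deadline(&p, t));
       printf("server may reassign layouts at t=%llu\n",
              (unsigned long long)server_reassign_time(&p, t));
       return 0;
   }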
889 As stated in the section "Dealing with Lease Expiration on the 890 Client" of [NFSV4.1], if any SEQUENCE operation is successful, but 891 sr_status_flag has SEQ4_STATUS_EXPIRED_ALL_STATE_REVOKED, 892 SEQ4_STATUS_EXPIRED_SOME_STATE_REVOKED, or 893 SEQ4_STATUS_ADMIN_STATE_REVOKED set, the client MUST immediately 894 cease to use all layouts and device id to device address mappings 895 associated with the corresponding server. 897 In the absence of known two-way communication between the client and 898 the server on the fore channel, the server must wait for at least the 899 time period "lease_time" plus "blh_maximum_io_time" before 900 transferring layouts from the original client to any other client. 901 The server, like the client, must take a conservative approach, and 902 start the lease expiration timer from the time that it received the 903 operation which last renewed the lease. 905 2.4. Crash Recovery Issues 907 When the server crashes while the client holds a writable layout, and 908 the client has written data to blocks covered by the layout, and the 909 blocks are still in the PNFS_BLOCK_INVALID_DATA state, the client has 910 two options for recovery. If the data that has been written to these 911 blocks is still cached by the client, the client can simply re-write 912 the data via NFSv4, once the server has come back online. However, 913 if the data is no longer in the client's cache, the client MUST NOT 914 attempt to source the data from the data servers. Instead, it should 915 attempt to commit the blocks in question to the server during the 916 server's recovery grace period, by sending a LAYOUTCOMMIT with the 917 "loca_reclaim" flag set to true. This process is described in detail 918 in [NFSv4.1] section 18.42.4. 920 2.5. Recalling resources: CB_RECALL_ANY 922 The server may decide that it cannot hold all of the state for 923 layouts without running out of resources. In such a case, it is free 924 to recall individual layouts using CB_LAYOUTRECALL to reduce the 925 load, or it may choose to request that the client return any layout. 927 The NFSv4.1 spec [NFSv4.1] defines the following types: 929 const RCA4_TYPE_MASK_BLK_LAYOUT = 4; 931 struct CB_RECALL_ANY4args { 932 uint32_t craa_objects_to_keep; 933 bitmap4 craa_type_mask; 934 }; 936 When the server sends a CB_RECALL_ANY request to a client specifying 937 the RCA4_TYPE_MASK_BLK_LAYOUT bit in craa_type_mask, the client 938 should immediately respond with NFS4_OK, and then asynchronously 939 return complete file layouts until the number of files with layouts 940 cached on the client is less than craa_objects_to_keep. 942 2.6. Transient and Permanent Errors 944 The server may respond to LAYOUTGET with a variety of error statuses. 945 These errors can convey transient conditions or more permanent 946 conditions that are unlikely to be resolved soon. 948 The transient errors, NFS4ERR_RECALLCONFLICT and NFS4ERR_TRYLATER, are 949 used to indicate that the server cannot immediately grant the layout 950 to the client. In the former case, this is because the server has 951 recently issued a CB_LAYOUTRECALL to the requesting client, whereas 952 in the case of NFS4ERR_TRYLATER, the server cannot grant the request, 953 possibly due to sharing conflicts with other clients. In either 954 case, a reasonable approach for the client is to wait several 955 milliseconds and retry the request. The client SHOULD track the 956 number of retries, and if forward progress is not made, the client 957 SHOULD send the READ or WRITE operation directly to the server.
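The following non-normative C fragment sketches the retry-then-fall-back behavior described above. The nfs_layoutget() and nfs_read_through_server() functions are hypothetical placeholders for the client's RPC machinery, and the lg_status enumerators stand in for the NFSv4.1 status codes named in the text.

   /*
    * Sketch only: retry LAYOUTGET on transient errors, then fall back to
    * READ/WRITE through the server if no forward progress is made.
    */
   #include <stdio.h>

   enum lg_status { LG_OK, LG_RECALLCONFLICT, LG_TRYLATER, LG_OTHER };

   #define MAX_LAYOUT_RETRIES 5
   #define RETRY_DELAY_MS     10

   static int attempts;                       /* simulation state only       */

   static enum lg_status nfs_layoutget(void)  /* pretend LAYOUTGET RPC       */
   {
       return (++attempts < 3) ? LG_TRYLATER : LG_OK;
   }

   static void sleep_ms(int ms) { (void)ms; } /* placeholder for nanosleep() */

   static void nfs_read_through_server(void)  /* pretend regular NFS READ    */
   {
       printf("falling back to READ through the NFSv4.1 server\n");
   }

   static void read_with_layout_or_fallback(void)
   {
       for (int retry = 0; retry < MAX_LAYOUT_RETRIES; retry++) {
           enum lg_status status = nfs_layoutget();

           if (status == LG_OK) {
               printf("layout granted after %d attempt(s); issuing direct I/O\n",
                      retry + 1);
               return;
           }
           if (status != LG_TRYLATER && status != LG_RECALLCONFLICT)
               break;                 /* permanent error: stop retrying      */
           sleep_ms(RETRY_DELAY_MS);  /* wait several milliseconds           */
       }
       /* No forward progress: send the READ (or WRITE) to the server. */
       nfs_read_through_server();
   }

   int main(void)
   {
       read_with_layout_or_fallback();
       return 0;
   }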
959 The error NFS4ERR_LAYOUTUNAVAILABLE may be returned by the server if 960 layouts are not supported for the requested file or its containing 961 file system. The server may also return this error code if the 962 server is in the process of migrating the file from secondary storage, 963 or for any other reason which causes the server to be unable to 964 supply the layout. As a result of receiving 965 NFS4ERR_LAYOUTUNAVAILABLE, the client SHOULD send future READ and 966 WRITE requests directly to the server. It is expected that a client 967 will not cache the file's layout-unavailable state forever, particularly 968 if the file is closed, and thus eventually, the client MAY reissue a 969 LAYOUTGET operation. 971 3. Security Considerations 973 Typically, SAN disk arrays and SAN protocols provide access control 974 mechanisms (e.g., logical unit number mapping and/or masking) which 975 operate at the granularity of individual hosts. The functionality 976 provided by such mechanisms makes it possible for the server to 977 "fence" individual client machines from certain physical disks---that 978 is to say, to prevent individual client machines from reading or 979 writing to certain physical disks. Finer-grained access control 980 methods are not generally available. For this reason, certain 981 security responsibilities are delegated to pNFS clients for 982 block/volume layouts. Block/volume storage systems generally control 983 access at a volume granularity, and hence pNFS clients have to be 984 trusted to only perform accesses allowed by the layout extents they 985 currently hold (e.g., not access storage for files on which a 986 layout extent is not held). In general, the server will not be able 987 to prevent a client which holds a layout for a file from accessing 988 parts of the physical disk not covered by the layout. Similarly, the 989 server will not be able to prevent a client from accessing blocks 990 covered by a layout that it has already returned. This block-based 991 level of protection must be provided by the client software. 993 An alternative method of block/volume protocol use is for the storage 994 devices to export virtualized block addresses, which do reflect the 995 files to which blocks belong. These virtual block addresses are 996 exported to pNFS clients via layouts. This allows the storage device 997 to make appropriate access checks, while mapping virtual block 998 addresses to physical block addresses. In environments where the 999 security requirements are such that client-side protection from 1000 access to storage outside of the authorized layout extents is not 1001 sufficient, pNFS block/volume storage layouts SHOULD NOT be used 1002 unless the storage device is able to implement the appropriate access 1003 checks, via use of virtualized block addresses or other means.
In 1004 contrast, an environment where client-side protection may suffice 1005 consists of co-located clients, server and storage systems in a 1006 datacenter with a physically isolated SAN under control of a single 1007 system administrator or small group of system administrators. 1009 This also has implications for some NFSv4 functionality outside pNFS. 1010 For instance, if a file is covered by a mandatory read-only lock, the 1011 server can ensure that only readable layouts for the file are granted 1012 to pNFS clients. However, it is up to each pNFS client to ensure 1013 that the readable layout is used only to service read requests, and 1014 not to allow writes to the existing parts of the file. Since 1015 block/volume storage systems are generally not capable of enforcing 1016 such file-based security, in environments where pNFS clients cannot 1017 be trusted to enforce such policies, pNFS block/volume storage 1018 layouts SHOULD NOT be used. 1020 Access to block/volume storage is logically at a lower layer of the 1021 I/O stack than NFSv4, and hence NFSv4 security is not directly 1022 applicable to protocols that access such storage directly. Depending 1023 on the protocol, some of the security mechanisms provided by NFSv4 1024 (e.g., encryption, cryptographic integrity) may not be available, or 1025 may be provided via different means. At one extreme, pNFS with 1026 block/volume storage can be used with storage access protocols (e.g., 1027 parallel SCSI) that provide essentially no security functionality. 1028 At the other extreme, pNFS may be used with storage protocols such as 1029 iSCSI that can provide significant security functionality. It is the 1030 responsibility of those administering and deploying pNFS with a 1031 block/volume storage access protocol to ensure that appropriate 1032 protection is provided to that protocol (physical security is a 1033 common means for protocols not based on IP). In environments where 1034 the security requirements for the storage protocol cannot be met, 1035 pNFS block/volume storage layouts SHOULD NOT be used. 1037 When security is available for a storage protocol, it is generally at 1038 a different granularity and with a different notion of identity than 1039 NFSv4 (e.g., NFSv4 controls user access to files, iSCSI controls 1040 initiator access to volumes). The responsibility for enforcing 1041 appropriate correspondences between these security layers is placed 1042 upon the pNFS client. As with the issues in the first paragraph of 1043 this section, in environments where the security requirements are 1044 such that client-side protection from access to storage outside of 1045 the layout is not sufficient, pNFS block/volume storage layouts 1046 SHOULD NOT be used. 1048 4. Conclusions 1050 This draft specifies the block/volume layout type for pNFS and 1051 associated functionality. 1053 5. IANA Considerations 1055 There are no IANA considerations in this document. All pNFS IANA 1056 Considerations are covered in [NFSV4.1]. 1058 6. Acknowledgments 1060 This draft draws extensively on the authors' familiarity with the 1061 mapping functionality and protocol in EMC's MPFS (previously named 1062 HighRoad) system [MPFS]. The protocol used by MPFS is called FMP 1063 (File Mapping Protocol); it is an add-on protocol that runs in 1064 parallel with file system protocols such as NFSv3 to provide pNFS- 1065 like functionality for block/volume storage. 
While drawing on FMP, 1066 the data structures and functional considerations in this draft 1067 differ in significant ways, based on lessons learned and the 1068 opportunity to take advantage of NFSv4 features such as COMPOUND 1069 operations. The design to support pNFS client participation in copy- 1070 on-write is based on text and ideas contributed by Craig Everhart. 1072 Andy Adamson, Ben Campbell, Richard Chandler, Benny Halevy, Fredric 1073 Isaman, and Mario Wurzl all helped to review drafts of this 1074 specification. 1076 7. References 1078 7.1. Normative References 1080 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1081 Requirement Levels", BCP 14, RFC 2119, March 1997. 1083 [NFSV4.1] Shepler, S., Eisler, M., and Noveck, D. ed., "NFSv4 Minor 1084 Version 1", draft-ietf-nfsv4-minorversion1-26.txt, Internet 1085 Draft, September 2008. 1087 [XDR] Eisler, M., "XDR: External Data Representation Standard", 1088 STD 67, RFC 4506, May 2006. 1090 7.2. Informative References 1092 [MPFS] EMC Corporation, "EMC Celerra Multi-Path File System", EMC 1093 Data Sheet, available at: 1094 http://www.emc.com/collateral/software/data-sheet/h2006-celerra-mpfs- 1095 mpfsi.pdf 1096 link checked 13 March 2008 1098 [SMIS] SNIA, "SNIA Storage Management Initiative Specification", 1099 version 1.0.2, available at: 1100 http://www.snia.org/tech_activities/standards/curr_standards/smi/SMI- 1101 S_Technical_Position_v1.0.3r1.pdf 1102 link checked 13 March 2008 1104 Authors' Addresses 1106 David L. Black 1107 EMC Corporation 1108 176 South Street 1109 Hopkinton, MA 01748 1111 Phone: +1 (508) 293-7953 1112 Email: black_david@emc.com 1114 Stephen Fridella 1115 EMC Corporation 1116 228 South Street 1117 Hopkinton, MA 01748 1119 Phone: +1 (508) 249-3528 1120 Email: fridella_stephen@emc.com 1122 Jason Glasgow 1123 Google 1124 5 Cambridge Center 1125 Cambridge, MA 02142 1127 Phone: +1 (617) 575 1599 1128 Email: jglasgow@aya.yale.edu 1130 Intellectual Property Statement 1132 The IETF takes no position regarding the validity or scope of any 1133 Intellectual Property Rights or other rights that might be claimed to 1134 pertain to the implementation or use of the technology described in 1135 this document or the extent to which any license under such rights 1136 might or might not be available; nor does it represent that it has 1137 made any independent effort to identify any such rights. Information 1138 on the procedures with respect to rights in RFC documents can be 1139 found in BCP 78 and BCP 79. 1141 Copies of IPR disclosures made to the IETF Secretariat and any 1142 assurances of licenses to be made available, or the result of an 1143 attempt made to obtain a general license or permission for the use of 1144 such proprietary rights by implementers or users of this 1145 specification can be obtained from the IETF on-line IPR repository at 1146 http://www.ietf.org/ipr. 1148 The IETF invites any interested party to bring to its attention any 1149 copyrights, patents or patent applications, or other proprietary 1150 rights that may cover technology that may be required to implement 1151 this standard. Please address the information to the IETF at ietf- 1152 ipr@ietf.org. 
1154 Disclaimer of Validity 1156 This document and the information contained herein are provided on an 1157 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 1158 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 1159 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 1160 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 1161 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 1162 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 1164 Copyright Statement 1166 Copyright (C) The IETF Trust (2008). 1168 This document is subject to the rights, licenses and restrictions 1169 contained in BCP 78, and except as set forth therein, the authors 1170 retain all their rights. 1172 Acknowledgment 1174 Funding for the RFC Editor function is currently provided by the 1175 Internet Society.