idnits 2.17.1 draft-ietf-nfsv4-pnfs-block-08.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- ** It looks like you're using RFC 3978 boilerplate. You should update this to the boilerplate described in the IETF Trust License Policy document (see https://trustee.ietf.org/license-info), which is required now. -- Found old boilerplate from RFC 3978, Section 5.1 on line 17. -- Found old boilerplate from RFC 3978, Section 5.5, updated by RFC 4748 on line 1144. -- Found old boilerplate from RFC 3979, Section 5, paragraph 1 on line 1121. -- Found old boilerplate from RFC 3979, Section 5, paragraph 2 on line 1128. -- Found old boilerplate from RFC 3979, Section 5, paragraph 3 on line 1134. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- No issues found here. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust Copyright Line does not match the current year == Line 262 has weird spacing: '... opaque bsc_c...' -- The document seems to lack a disclaimer for pre-RFC5378 work, but may have content which was first submitted before 10 November 2008. If you have contacted all the original authors and they are all willing to grant the BCP78 rights to the IETF Trust, then this is fine, and you can ignore this comment. If not, you may need to add the pre-RFC5378 disclaimer. (See the Legal Provisions document at https://trustee.ietf.org/license-info for more information.) -- The document date (April 1, 2008) is 5863 days in the past. Is this intentional? -- Found something which looks like a code comment -- if you have code sections in the document, please surround them with '' and '' lines. Checking references for intended status: Proposed Standard ---------------------------------------------------------------------------- (See RFCs 3967 and 4897 for information about using normative references to lower-maturity documents in RFCs) No issues found here. Summary: 1 error (**), 0 flaws (~~), 2 warnings (==), 8 comments (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 NFSv4 Working Group D. Black 3 Internet Draft S. Fridella 4 Expires: October 2, 2008 J. Glasgow 5 Intended Status: Proposed Standard EMC Corporation 6 April 1, 2008 8 pNFS Block/Volume Layout 9 draft-ietf-nfsv4-pnfs-block-08.txt 11 Status of this Memo 13 By submitting this Internet-Draft, each author represents that 14 any applicable patent or other IPR claims of which he or she is 15 aware have been or will be disclosed, and any of which he or she 16 becomes aware will be disclosed, in accordance with Section 6 of 17 BCP 79. 19 Internet-Drafts are working documents of the Internet Engineering 20 Task Force (IETF), its areas, and its working groups. Note that 21 other groups may also distribute working documents as Internet- 22 Drafts. 24 Internet-Drafts are draft documents valid for a maximum of six months 25 and may be updated, replaced, or obsoleted by other documents at any 26 time. 
It is inappropriate to use Internet-Drafts as reference 27 material or to cite them other than as "work in progress." 29 The list of current Internet-Drafts can be accessed at 30 http://www.ietf.org/ietf/1id-abstracts.txt 32 The list of Internet-Draft Shadow Directories can be accessed at 33 http://www.ietf.org/shadow.html 35 This Internet-Draft will expire in September 2008. 37 Abstract 39 Parallel NFS (pNFS) extends NFSv4 to allow clients to directly access 40 file data on the storage used by the NFSv4 server. This ability to 41 bypass the server for data access can increase both performance and 42 parallelism, but requires additional client functionality for data 43 access, some of which is dependent on the class of storage used. The 44 main pNFS operations draft specifies storage-class-independent 45 extensions to NFS; this draft specifies the additional extensions 46 (primarily data structures) for use of pNFS with block and volume 47 based storage. 49 Conventions used in this document 51 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 52 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 53 document are to be interpreted as described in RFC-2119 [RFC2119]. 55 Table of Contents 57 1. Introduction...................................................3 58 1.1. General Definitions.......................................3 59 1.2. XDR Description of NFSv4.1 block layout...................4 60 2. Block Layout Description.......................................5 61 2.1. Background and Architecture...............................5 62 2.2. GETDEVICELIST and GETDEVICEINFO...........................6 63 2.2.1. Volume Identification................................6 64 2.2.2. Volume Topology......................................7 65 2.2.3. GETDEVICELIST and GETDEVICEINFO deviceid4...........10 66 2.3. Data Structures: Extents and Extent Lists................10 67 2.3.1. Layout Requests and Extent Lists....................13 68 2.3.2. Layout Commits......................................14 69 2.3.3. Layout Returns......................................14 70 2.3.4. Client Copy-on-Write Processing.....................15 71 2.3.5. Extents are Permissions.............................16 72 2.3.6. End-of-file Processing..............................18 73 2.3.7. Layout Hints........................................18 74 2.3.8. Client Fencing......................................19 75 2.4. Crash Recovery Issues....................................21 76 2.5. Recalling resources: CB_RECALL_ANY.......................21 77 2.6. Transient and Permanent Errors...........................22 78 3. Security Considerations.......................................22 79 4. Conclusions...................................................24 80 5. IANA Considerations...........................................24 81 6. Acknowledgments...............................................24 82 7. References....................................................25 83 7.1. Normative References.....................................25 84 7.2. Informative References...................................25 85 Author's Addresses...............................................25 86 Intellectual Property Statement..................................26 87 Disclaimer of Validity...........................................26 88 Copyright Statement..............................................27 89 Acknowledgment...................................................27 91 1. 
Introduction 93 Figure 1 shows the overall architecture of a pNFS system: 95 +-----------+ 96 |+-----------+ +-----------+ 97 ||+-----------+ | | 98 ||| | NFSv4.1 + pNFS | | 99 +|| Clients |<------------------------------>| Server | 100 +| | | | 101 +-----------+ | | 102 ||| +-----------+ 103 ||| | 104 ||| | 105 ||| +-----------+ | 106 ||| |+-----------+ | 107 ||+----------------||+-----------+ | 108 |+-----------------||| | | 109 +------------------+|| Storage |------------+ 110 +| Systems | 111 +-----------+ 113 Figure 1 pNFS Architecture 115 The overall approach is that pNFS-enhanced clients obtain sufficient 116 information from the server to enable them to access the underlying 117 storage (on the Storage Systems) directly. See the pNFS portion of 118 [NFSV4.1] for more details. This draft is concerned with access from 119 pNFS clients to Storage Systems over storage protocols based on 120 blocks and volumes, such as the SCSI protocol family (e.g., parallel 121 SCSI, FCP for Fibre Channel, iSCSI, SAS, and FCoE). This class of 122 storage is referred to as block/volume storage. While the Server to 123 Storage System protocol is not of concern for interoperability here, 124 it will typically also be a block/volume protocol when clients use 125 block/ volume protocols. 127 1.1. General Definitions 129 The following definitions are provided for the purpose of providing 130 an appropriate context for the reader. 132 Byte 134 This document defines a byte as an octet, i.e. a datum exactly 8 135 bits in length. 137 Client 139 The "client" is the entity that accesses the NFS server's 140 resources. The client may be an application which contains the 141 logic to access the NFS server directly. The client may also be 142 the traditional operating system client that provides remote file 143 system services for a set of applications. 145 Server 147 The "Server" is the entity responsible for coordinating client 148 access to a set of file systems and is identified by a Server 149 owner. 151 1.2. XDR Description 153 This document contains the XDR ([XDR]) description of the NFSv4.1 154 block layout protocol. The XDR description is embedded in this 155 document in a way that makes it simple for the reader to extract into 156 a ready to compile form. The reader can feed this document into the 157 following shell script to produce the machine readable XDR 158 description of the NFSv4.1 block layout: 160 #!/bin/sh 161 grep "^ *///" | sed 's?^ *///??' 163 I.e. if the above script is stored in a file called "extract.sh", and 164 this document is in a file called "spec.txt", then the reader can do: 166 sh extract.sh < spec.txt > nfs4_block_layout_spec.x 168 The effect of the script is to remove both leading white space and a 169 sentinel sequence of "///" from each matching line. 171 The embedded XDR file header follows, with subsequent pieces embedded 172 throughout the document: 174 ////* 175 /// * This file was machine generated for 176 /// * draft-ietf-nfsv4-pnfs-block-07 177 /// * Last updated Tue Apr 1 15:57:06 EST 2008 178 /// */ 179 ////* 180 /// * Copyright (C) The IETF Trust (2007-2008) 181 /// * All Rights Reserved. 182 /// * 183 /// * Copyright (C) The Internet Society (1998-2006). 184 /// * All Rights Reserved. 185 /// */ 186 /// 187 ////* 188 /// * nfs4_block_layout_prot.x 189 /// */ 190 /// 191 ///%#include "nfsv41.h" 192 /// 194 The XDR code contained in this document depends on types from 195 nfsv41.x file. 
This includes both nfs types that end with a 4, such 196 as offset4, length4, etc., as well as more generic types such as 197 uint32_t and uint64_t. 199 2. Block Layout Description 201 2.1. Background and Architecture 203 The fundamental storage abstraction supported by block/volume storage 204 is a storage volume consisting of a sequential series of fixed size 205 blocks. This can be thought of as a logical disk; it may be realized 206 by the Storage System as a physical disk, a portion of a physical 207 disk, or something more complex (e.g., concatenation, striping, RAID, 208 and combinations thereof) involving multiple physical disks or 209 portions thereof. 211 A pNFS layout for this block/volume class of storage is responsible 212 for mapping from an NFS file (or portion of a file) to the blocks of 213 storage volumes that contain the file. The blocks are expressed as 214 extents with 64 bit offsets and lengths using the existing NFSv4 215 offset4 and length4 types. Clients must be able to perform I/O to 216 the block extents without affecting additional areas of storage 217 (especially important for writes); therefore, extents MUST be aligned 218 to 512-byte boundaries, and writable extents MUST be aligned to the 219 block size used by the NFSv4 server in managing the actual file 220 system (4 kilobytes and 8 kilobytes are common block sizes). This 221 block size is available as the NFSv4.1 layout_blksize attribute 222 [NFSV4.1]. Readable extents SHOULD be aligned to the block size used 223 by the NFSv4 server, but in order to support legacy file systems with 224 fragments, alignment to 512 byte boundaries is acceptable. 226 The pNFS operation for requesting a layout (LAYOUTGET) includes the 227 "layoutiomode4 loga_iomode" argument which indicates whether the 228 requested layout is for read-only use or read-write use. A read-only 229 layout may contain holes that are read as zero, whereas a read-write 230 layout will contain allocated, but un-initialized storage in those 231 holes (read as zero, can be written by the client). This draft also 232 supports client participation in copy-on-write (e.g., for file systems 233 with snapshots) by providing both read-only and un-initialized 234 storage for the same range in a layout. Reads are initially 235 performed on the read-only storage, with writes going to the un- 236 initialized storage. After the first write that initializes the un- 237 initialized storage, all reads are performed to that now-initialized 238 writeable storage, and the corresponding read-only storage is no 239 longer used. 241 2.2. GETDEVICELIST and GETDEVICEINFO 243 2.2.1. Volume Identification 245 Storage Systems such as storage arrays can have multiple physical 246 network ports that need not be connected to a common network, 247 resulting in a pNFS client having simultaneous multipath access to 248 the same storage volumes via different ports on different networks. 249 The networks may not even be the same technology - for example, 250 access to the same volume via both iSCSI and Fibre Channel is 251 possible, hence network addresses are difficult to use for volume 252 identification. For this reason, this pNFS block layout identifies 253 storage volumes by content, for example, providing the means to match 254 (unique portions of) labels used by volume managers. Any block pNFS 255 system using this layout MUST support a means of content-based unique 256 volume identification that can be employed via the data structure 257 given here.
259 ///struct pnfs_block_sig_component4 { /* disk signature component */ 260 /// int64_t bsc_sig_offset; /* byte offset of component 261 /// on volume */ 262 /// opaque bsc_contents<>; /* contents of this component 263 /// of the signature */ 264 ///}; 265 /// 267 Note that the opaque "bsc_contents" field in the 268 "pnfs_block_sig_component4" structure MUST NOT be interpreted as a 269 zero-terminated string, as it may contain embedded zero-valued bytes. 270 There are no restrictions on alignment (e.g., neither bsc_sig_offset 271 nor the length is required to be a multiple of 4). The 272 bsc_sig_offset is a signed quantity which, when positive, represents a 273 byte offset from the start of the volume, and, when negative, 274 represents a byte offset from the end of the volume. 276 Negative offsets are permitted in order to simplify the client 277 implementation on systems where the device label is found at a fixed 278 offset from the end of the volume. If the server uses negative 279 offsets to describe the signature, then the client and server MUST 280 NOT see different volume sizes. Negative offsets SHOULD NOT be used 281 in systems that dynamically resize volumes unless care is taken to 282 ensure that the device label is always present at the offset from the 283 end of the volume as seen by the clients. 285 A signature is an array of up to "PNFS_BLOCK_MAX_SIG_COMP" (defined 286 below) signature components. The client MUST NOT assume that all 287 signature components are colocated within a single sector on a block 288 device. 290 The pNFS client block layout driver uses this volume identification 291 to map pnfs_block_volume_type4 PNFS_BLOCK_VOLUME_SIMPLE deviceid4s to 292 its local view of a LUN. 294 2.2.2. Volume Topology 296 The pNFS block server volume topology is expressed as an arbitrary 297 combination of base volume types enumerated in the following data 298 structures. The individual components of the topology are contained 299 in an array, and components may refer to other components by using 300 array indices.
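As a non-normative illustration of the content-based identification described in Section 2.2.1, the sketch below shows how a client block layout driver might test whether a locally visible LUN carries a given disk signature.  The device object, size, and component tuples are hypothetical stand-ins; a real client would obtain the components by decoding the pnfs_block_sig_component4 array returned via GETDEVICEINFO.

   # Sketch (Python): match a list of (bsc_sig_offset, bsc_contents)
   # signature components against a candidate device.  Negative offsets
   # are measured from the end of the volume.
   def signature_matches(dev, dev_size, components):
       for sig_offset, contents in components:
           offset = sig_offset if sig_offset >= 0 else dev_size + sig_offset
           if offset < 0 or offset + len(contents) > dev_size:
               return False
           dev.seek(offset)
           if dev.read(len(contents)) != contents:
               return False
       return True

A driver could apply such a check to each candidate LUN when resolving the PNFS_BLOCK_VOLUME_SIMPLE components of the volume topology defined below.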
302 ///enum pnfs_block_volume_type4 { 303 /// PNFS_BLOCK_VOLUME_SIMPLE = 0, /* volume maps to a single 304 /// LU */ 305 /// PNFS_BLOCK_VOLUME_SLICE = 1, /* volume is a slice of 306 /// another volume */ 307 /// PNFS_BLOCK_VOLUME_CONCAT = 2, /* volume is a 308 /// concatenation of 309 /// multiple volumes */ 310 /// PNFS_BLOCK_VOLUME_STRIPE = 3 /* volume is striped across 311 /// multiple volumes */ 312 ///}; 313 /// 314 ///const PNFS_BLOCK_MAX_SIG_COMP = 16; /* maximum components per 315 /// signature */ 316 ///struct pnfs_block_simple_volume_info4 { 317 /// pnfs_block_sig_component4 bsv_ds; 318 /// /* disk signature */ 319 ///}; 320 /// 321 /// 322 ///struct pnfs_block_slice_volume_info4 { 323 /// offset4 bsv_start; /* offset of the start of the 324 /// slice in bytes */ 325 /// length4 bsv_length; /* length of slice in bytes */ 326 /// uint32_t bsv_volume; /* array index of sliced 327 /// volume */ 328 ///}; 329 /// 330 ///struct pnfs_block_concat_volume_info4 { 331 /// uint32_t bcv_volumes<>; /* array indices of volumes 332 /// which are concatenated */ 333 ///}; 334 /// 335 ///struct pnfs_block_stripe_volume_info4 { 336 /// length4 bsv_stripe_unit; /* size of stripe in bytes */ 337 /// uint32_t bsv_volumes<>; /* array indices of volumes 338 /// which are striped across -- 339 /// MUST be same size */ 340 ///}; 341 /// 342 ///union pnfs_block_volume4 switch (pnfs_block_volume_type4 type) { 343 /// case PNFS_BLOCK_VOLUME_SIMPLE: 344 /// pnfs_block_simple_volume_info4 bv_simple_info; 345 /// case PNFS_BLOCK_VOLUME_SLICE: 346 /// pnfs_block_slice_volume_info4 bv_slice_info; 347 /// case PNFS_BLOCK_VOLUME_CONCAT: 348 /// pnfs_block_concat_volume_info4 bv_concat_info; 349 /// case PNFS_BLOCK_VOLUME_STRIPE: 350 /// pnfs_block_stripe_volume_info4 bv_stripe_info; 351 ///}; 352 /// 353 ////* block layout specific type for da_addr_body */ 354 ///struct pnfs_block_deviceaddr4 { 355 /// pnfs_block_volume4 bda_volumes<>; /* array of volumes */ 356 ///}; 357 /// 359 The "pnfs_block_deviceaddr4" data structure allows 360 arbitrarily complex nested volume structures to be encoded. 361 The types of aggregations that are allowed are stripes, 362 concatenations, and slices. Note that the volume topology expressed 363 in the pnfs_block_deviceaddr4 data structure will always resolve to a 364 set of pnfs_block_volume_type4 PNFS_BLOCK_VOLUME_SIMPLE volumes. The array 365 of volumes is ordered such that the root of the volume hierarchy is 366 the last element of the array. Concat, slice, and stripe volumes MUST 367 refer to volumes defined by lower indexed elements of the array. 369 The "pnfs_block_deviceaddr4" data structure is returned by the 370 server as the storage-protocol-specific opaque field da_addr_body in 371 the "device_addr4" structure by a successful GETDEVICELIST operation 372 [NFSV4.1]. 374 As noted above, all device_addr4 structures eventually resolve to a 375 set of volumes of type PNFS_BLOCK_VOLUME_SIMPLE. These volumes are 376 each uniquely identified by a set of signature components. 377 Complicated volume hierarchies may be composed of dozens of volumes, 378 each with several signature components; thus, the device address may 379 require several kilobytes. The client SHOULD be prepared to allocate 380 a large buffer to contain the result. In the case of the server 381 returning NFS4ERR_TOOSMALL, the client SHOULD allocate a buffer of at 382 least gdir_mincount_bytes to contain the expected result and retry 383 the GETDEVICEINFO request. 385 2.2.3.
GETDEVICELIST and GETDEVICEINFO deviceid4 387 The server in response to a GETDEVICELIST request typically will 388 return a single "deviceid4" in the gdlr_deviceid_list array. This is 389 because the deviceid4, when passed to GETDEVICEINFO, will return a 390 "device_addr4" which encodes the entire volume hierarchy. In the 391 case of copy-on-write file systems, the "gdlr_deviceid_list" array 392 may contain two deviceid4s, one referencing the read-only volume 393 hierarchy, and one referencing the writable volume hierarchy. There 394 is no required ordering of the readable and writable ids in the array, 395 as the volumes are uniquely identified by their deviceid4, and are 396 referred to by layouts using the deviceid4. Another example of the 397 server returning multiple device items occurs when the file handle 398 represents the root of a name space spanning multiple physical file 399 systems on the server, each with a different volume hierarchy. In 400 this example, a server implementation may return either a list of 401 deviceids used by each of the physical file systems, or it may return 402 an empty list. 404 Each deviceid4 returned by a successful GETDEVICELIST operation is a 405 shorthand id used to reference the whole volume topology. These 406 device ids, as well as device ids returned in extents of a LAYOUTGET 407 operation, can be used as input to the GETDEVICEINFO operation. 408 Decoding the "pnfs_block_deviceaddr4" results in a flat ordering of 409 data blocks mapped to PNFS_BLOCK_VOLUME_SIMPLE volumes. Combined 410 with the mapping to a client LUN described in 2.2.1 Volume 411 Identification, a logical volume offset can be mapped to a block on a 412 pNFS client LUN [NFSV4.1]. 414 2.3. Data Structures: Extents and Extent Lists 416 A pNFS block layout is a list of extents within a flat array of data 417 blocks in a logical volume. The details of the volume topology can 418 be determined by using the GETDEVICEINFO operation (see discussion of 419 volume identification, section 2.2 above). The block layout 420 describes the individual block extents on the volume that make up the 421 file. The offsets and lengths contained in an extent are specified in 422 units of bytes. 424 ///enum pnfs_block_extent_state4 { 425 /// PNFS_BLOCK_READ_WRITE_DATA = 0, /* the data located by this 426 /// extent is valid 427 /// for reading and writing. */ 428 /// PNFS_BLOCK_READ_DATA = 1, /* the data located by this 429 /// extent is valid for reading 430 /// only; it may not be 431 /// written. */ 432 /// PNFS_BLOCK_INVALID_DATA = 2, /* the location is valid; the 433 /// data is invalid. It is a 434 /// newly (pre-) allocated 435 /// extent. There is physical 436 /// space on the volume. */ 437 /// PNFS_BLOCK_NONE_DATA = 3 /* the location is invalid. It 438 /// is a hole in the file. 439 /// There is no physical space 440 /// on the volume. */ 441 ///}; 443 /// 444 ///struct pnfs_block_extent4 { 445 /// deviceid4 bex_vol_id; /* id of logical volume on 446 /// which extent of file is 447 /// stored.
*/ 448 /// offset4 bex_file_offset; /* the starting byte offset in 449 /// the file */ 450 /// length4 bex_length; /* the size in bytes of the 451 /// extent */ 452 /// offset4 bex_storage_offset;/* the starting byte offset in 453 /// the volume */ 454 /// pnfs_block_extent_state4 bex_state; 455 /// /* the state of this extent */ 456 ///}; 457 /// 458 ////* block layout specific type for loc_body */ 459 ///struct pnfs_block_layout4 { 460 /// pnfs_block_extent4 blo_extents<>; 461 /// /* extents which make up this 462 /// layout. */ 463 ///}; 464 /// 466 The block layout consists of a list of extents which map the logical 467 regions of the file to physical locations on a volume. The 468 "bex_storage_offset" field within each extent identifies a location 469 on the logical volume specified by the "bex_vol_id" field in the 470 extent. The bex_vol_id itself is shorthand for the whole topology of 471 the logical volume on which the file is stored. The client is 472 responsible for translating this logical offset into an offset on the 473 appropriate underlying SAN logical unit. In most cases all extents 474 in a layout will reside on the same volume and thus have the same 475 bex_vol_id. In the case of copy on write file systems, the 476 PNFS_BLOCK_READ_DATA extents may have a different bex_vol_id from the 477 writable extents. 479 Each extent maps a logical region of the file onto a portion of the 480 specified logical volume. The bex_file_offset, bex_length, and 481 bex_state fields for an extent returned from the server are valid for 482 all extents. In contrast, the interpretation of the 483 bex_storage_offset field depends on the value of bex_state as follows 484 (in increasing order): 486 o PNFS_BLOCK_READ_WRITE_DATA means that bex_storage_offset is valid, 487 and points to valid/initialized data that can be read and written. 489 o PNFS_BLOCK_READ_DATA means that bex_storage_offset is valid and 490 points to valid/ initialized data which can only be read. Write 491 operations are prohibited; the client may need to request a read- 492 write layout. 494 o PNFS_BLOCK_INVALID_DATA means that bex_storage_offset is valid, 495 but points to invalid un-initialized data. This data must not be 496 physically read from the disk until it has been initialized. A 497 read request for a PNFS_BLOCK_INVALID_DATA extent must fill the 498 user buffer with zeros, unless the extent is covered by a 499 PNFS_BLOCK_READ_DATA extent of a copy-on-write file system. Write 500 requests must write whole server-sized blocks to the disk; bytes 501 not initialized by the user must be set to zero. Any write to 502 storage in a PNFS_BLOCK_INVALID_DATA extent changes the written 503 portion of the extent to PNFS_BLOCK_READ_WRITE_DATA; the pNFS 504 client is responsible for reporting this change via LAYOUTCOMMIT. 506 o PNFS_BLOCK_NONE_DATA means that bex_storage_offset is not valid, 507 and this extent may not be used to satisfy write requests. Read 508 requests may be satisfied by zero-filling as for 509 PNFS_BLOCK_INVALID_DATA. PNFS_BLOCK_NONE_DATA extents may be 510 returned by requests for readable extents; they are never returned 511 if the request was for a writeable extent. 513 An extent list lists all relevant extents in increasing order of the 514 bex_file_offset of each extent; any ties are broken by increasing 515 order of the extent state (bex_state). 517 2.3.1. Layout Requests and Extent Lists 519 Each request for a layout specifies at least three parameters: file 520 offset, desired size, and minimum size. 
If the status of a request 521 indicates success, the extent list returned must meet the following 522 criteria: 524 o A request for a readable (but not writeable) layout returns only 525 PNFS_BLOCK_READ_DATA or PNFS_BLOCK_NONE_DATA extents (but not 526 PNFS_BLOCK_INVALID_DATA or PNFS_BLOCK_READ_WRITE_DATA extents). 528 o A request for a writeable layout returns 529 PNFS_BLOCK_READ_WRITE_DATA or PNFS_BLOCK_INVALID_DATA extents (but 530 not PNFS_BLOCK_NONE_DATA extents). It may also return 531 PNFS_BLOCK_READ_DATA extents only when the offset ranges in those 532 extents are also covered by PNFS_BLOCK_INVALID_DATA extents to 533 permit writes. 535 o The first extent in the list MUST contain the requested starting 536 offset. 538 o The total size of extents within the requested range MUST cover at 539 least the minimum size. One exception is allowed: the total size 540 MAY be smaller if only readable extents were requested and EOF is 541 encountered. 543 o Extents in the extent list MUST be logically contiguous for a 544 read-only layout. For a read-write layout, the set of writable 545 extents (i.e., excluding PNFS_BLOCK_READ_DATA extents) MUST be 546 logically contiguous. Every PNFS_BLOCK_READ_DATA extent in a 547 read-write layout MUST be covered by one or more 548 PNFS_BLOCK_INVALID_DATA extents. This overlap of 549 PNFS_BLOCK_READ_DATA and PNFS_BLOCK_INVALID_DATA extents is the 550 only permitted extent overlap. 552 o Extents MUST be ordered in the list by starting offset, with 553 PNFS_BLOCK_READ_DATA extents preceding PNFS_BLOCK_INVALID_DATA 554 extents in the case of equal bex_file_offsets. 556 2.3.2. Layout Commits 558 ////* block layout specific type for lou_body */ 559 ///struct pnfs_block_layoutupdate4 { 560 /// pnfs_block_extent4 blu_commit_list<>; 561 /// /* list of extents which 562 /// * now contain valid data. 563 /// */ 564 ///}; 565 /// 567 The "pnfs_block_layoutupdate4" structure is used by the client as the 568 block-protocol specific argument in a LAYOUTCOMMIT operation. The 569 "blu_commit_list" field is an extent list covering regions of the 570 file layout that were previously in the PNFS_BLOCK_INVALID_DATA 571 state, but have been written by the client and should now be 572 considered in the PNFS_BLOCK_READ_WRITE_DATA state. The bex_state 573 field of each extent in the blu_commit_list MUST be set to 574 PNFS_BLOCK_READ_WRITE_DATA. The extents in the commit list MUST be 575 disjoint and MUST be sorted by bex_file_offset. The 576 bex_storage_offset field is unused. Implementers should be aware 577 that a server may be unable to commit regions at a granularity 578 smaller than a file-system block (typically 4KB or 8KB). As noted 579 above, the block-size that the server uses is available as an NFSv4 580 attribute, and any extents included in the "blu_commit_list" MUST be 581 aligned to this granularity and have a size that is a multiple of 582 this granularity. If the client believes that its actions have moved 583 the end-of-file into the middle of a block being committed, the 584 client MUST write zeroes from the end-of-file to the end of that 585 block before committing the block. Failure to do so may result in 586 junk (uninitialized data) appearing in that area if the file is 587 subsequently extended by moving the end-of-file. 589 2.3.3. Layout Returns 591 The LAYOUTRETURN operation is done without any block layout specific 592 data. 
When the LAYOUTRETURN operation specifies a 593 LAYOUTRETURN4_FILE_return type, then the layoutreturn_file4 data 594 structure specifies the region of the file layout that is no longer 595 needed by the client. The opaque "lrf_body" field of the 596 "layoutreturn_file4" data structure MUST have length zero. A 597 LAYOUTRETURN operation represents an explicit release of resources by 598 the client, usually done for the purpose of avoiding unnecessary 599 CB_LAYOUTRECALL operations in the future. The client may return 600 disjoint regions of the file by using multiple LAYOUTRETURN 601 operations within a single COMPOUND operation. 603 Note that the block/volume layout supports unilateral layout 604 revocation. When a layout is unilaterally revoked by the server, 605 usually due to the client's lease time expiring, or a delegation 606 being recalled, or the client failing to return a layout in a timely 607 manner, it is important for the sake of correctness that any in- 608 flight I/Os that the client issued before the layout was revoked are 609 rejected at the storage. For the block/volume protocol, this is 610 possible by fencing a client with an expired layout timer from the 611 physical storage. Note, however, that the granularity of this 612 operation can only be at the host/logical-unit level. Thus, if one 613 of a client's layouts is unilaterally revoked by the server, it will 614 effectively render useless *all* of the client's layouts for files 615 located on the storage units comprising the logical volume. This may 616 render useless the client's layouts for files in other file systems. 618 2.3.4. Client Copy-on-Write Processing 620 Copy-on-write is a mechanism used to support file and/or file system 621 snapshots. When writing to unaligned regions, or to regions smaller 622 than a file system block, the writer must copy the portions of the 623 original file data to a new location on disk. This behavior can 624 either be implemented on the client or the server. The paragraphs 625 below describe how a pNFS block layout client implements access to a 626 file which requires copy-on-write semantics. 628 Distinguishing the PNFS_BLOCK_READ_WRITE_DATA and 629 PNFS_BLOCK_READ_DATA extent types in combination with the allowed 630 overlap of PNFS_BLOCK_READ_DATA extents with PNFS_BLOCK_INVALID_DATA 631 extents allows copy-on-write processing to be done by pNFS clients. 632 In classic NFS, this operation would be done by the server. Since 633 pNFS enables clients to do direct block access, it is useful for 634 clients to participate in copy-on-write operations. All block/volume 635 pNFS clients MUST support this copy-on-write processing. 637 When a client wishes to write data covered by a PNFS_BLOCK_READ_DATA 638 extent, it MUST have requested a writable layout from the server; 639 that layout will contain PNFS_BLOCK_INVALID_DATA extents to cover all 640 the data ranges of that layout's PNFS_BLOCK_READ_DATA extents. More 641 precisely, for any bex_file_offset range covered by one or more 642 PNFS_BLOCK_READ_DATA extents in a writable layout, the server MUST 643 include one or more PNFS_BLOCK_INVALID_DATA extents in the layout 644 that cover the same bex_file_offset range. 
When performing a write 645 to such an area of a layout, the client MUST effectively copy the 646 data from the PNFS_BLOCK_READ_DATA extent for any partial blocks of 647 bex_file_offset and range, merge in the changes to be written, and 648 write the result to the PNFS_BLOCK_INVALID_DATA extent for the blocks 649 for that bex_file_offset and range. That is, if entire blocks of 650 data are to be overwritten by an operation, the corresponding 651 PNFS_BLOCK_READ_DATA blocks need not be fetched, but any partial- 652 block writes must be merged with data fetched via 653 PNFS_BLOCK_READ_DATA extents before storing the result via 654 PNFS_BLOCK_INVALID_DATA extents. For the purposes of this 655 discussion, "entire blocks" and "partial blocks" refer to the 656 server's file-system block size. Storing of data in a 657 PNFS_BLOCK_INVALID_DATA extent converts the written portion of the 658 PNFS_BLOCK_INVALID_DATA extent to a PNFS_BLOCK_READ_WRITE_DATA 659 extent; all subsequent reads MUST be performed from this extent; the 660 corresponding portion of the PNFS_BLOCK_READ_DATA extent MUST NOT be 661 used after storing data in a PNFS_BLOCK_INVALID_DATA extent. If a 662 client writes only a portion of an extent, the extent may be split at 663 block aligned boundaries. 665 When a client wishes to write data to a PNFS_BLOCK_INVALID_DATA 666 extent that is not covered by a PNFS_BLOCK_READ_DATA extent, it MUST 667 treat this write identically to a write to a file not involved with 668 copy-on-write semantics. Thus, data must be written in at least 669 block size increments, aligned to multiples of block sized offsets, 670 and unwritten portions of blocks must be zero filled. 672 In the LAYOUTCOMMIT operation that normally sends updated layout 673 information back to the server, for writable data, some 674 PNFS_BLOCK_INVALID_DATA extents may be committed as 675 PNFS_BLOCK_READ_WRITE_DATA extents, signifying that the storage at 676 the corresponding bex_storage_offset values has been stored into and 677 is now to be considered as valid data to be read. 678 PNFS_BLOCK_READ_DATA extents are not committed to the server. For 679 extents that the client receives via LAYOUTGET as 680 PNFS_BLOCK_INVALID_DATA and returns via LAYOUTCOMMIT as 681 PNFS_BLOCK_READ_WRITE_DATA, the server will understand that the 682 PNFS_BLOCK_READ_DATA mapping for that extent is no longer valid or 683 necessary for that file. 685 2.3.5. Extents are Permissions 687 Layout extents returned to pNFS clients grant permission to read or 688 write; PNFS_BLOCK_READ_DATA and PNFS_BLOCK_NONE_DATA are read-only 689 (PNFS_BLOCK_NONE_DATA reads as zeroes), PNFS_BLOCK_READ_WRITE_DATA 690 and PNFS_BLOCK_INVALID_DATA are read/write, (PNFS_BLOCK_INVALID_DATA 691 reads as zeros, any write converts it to PNFS_BLOCK_READ_WRITE_DATA). 693 This is the only client means of obtaining permission to perform 694 direct I/O to storage devices; a pNFS client MUST NOT perform direct 695 I/O operations that are not permitted by an extent held by the 696 client. Client adherence to this rule places the pNFS server in 697 control of potentially conflicting storage device operations, 698 enabling the server to determine what does conflict and how to avoid 699 conflicts by granting and recalling extents to/from clients. 701 Block/volume class storage devices are not required to perform read 702 and write operations atomically. 
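A minimal, non-normative sketch of the read/merge/write sequence required by Section 2.3.4 follows.  The helpers read_old() (a read through the PNFS_BLOCK_READ_DATA extent) and write_new() (a write through the covering PNFS_BLOCK_INVALID_DATA extent) are hypothetical, and blksz stands for the server's file system block size.

   # Sketch (Python): client copy-on-write processing for a write of
   # "data" at byte offset "file_off" within a writable layout.
   def cow_write(file_off, data, blksz, read_old, write_new):
       start = (file_off // blksz) * blksz                 # round down
       end = -(-(file_off + len(data)) // blksz) * blksz   # round up
       # Fetch the old contents of the affected blocks.  Blocks that
       # are overwritten in their entirety need not be fetched; the
       # read is shown unconditionally only to keep the sketch short.
       buf = bytearray(read_old(start, end - start))
       buf[file_off - start:file_off - start + len(data)] = data
       write_new(start, bytes(buf))                        # whole blocks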
Overlapping concurrent read and 703 write operations to the same data may cause the read to return a 704 mixture of before-write and after-write data. Overlapping write 705 operations can be worse, as the result could be a mixture of data 706 from the two write operations; data corruption can occur if the 707 underlying storage is striped and the operations complete in 708 different orders on different stripes. A pNFS server can avoid these 709 conflicts by implementing a single writer XOR multiple readers 710 concurrency control policy when there are multiple clients who wish 711 to access the same data. This policy SHOULD be implemented when 712 storage devices do not provide atomicity for concurrent read/write 713 and write/write operations to the same data. 715 If a client makes a layout request that conflicts with an existing 716 layout delegation, the request will be rejected with the error 717 NFS4ERR_LAYOUTTRYLATER. This client is then expected to retry the 718 request after a short interval. During this interval the server 719 SHOULD recall the conflicting portion of the layout delegation from 720 the client that currently holds it. This reject-and-retry approach 721 does not prevent client starvation when there is contention for the 722 layout of a particular file. For this reason a pNFS server SHOULD 723 implement a mechanism to prevent starvation. One possibility is that 724 the server can maintain a queue of rejected layout requests. Each 725 new layout request can be checked to see if it conflicts with a 726 previous rejected request, and if so, the newer request can be 727 rejected. Once the original requesting client retries its request, 728 its entry in the rejected request queue can be cleared, or the entry 729 in the rejected request queue can be removed when it reaches a 730 certain age. 732 NFSv4 supports mandatory locks and share reservations. These are 733 mechanisms that clients can use to restrict the set of I/O operations 734 that are permissible to other clients. Since all I/O operations 735 ultimately arrive at the NFSv4 server for processing, the server is 736 in a position to enforce these restrictions. However, with pNFS 737 layouts, I/Os will be issued from the clients that hold the layouts 738 directly to the storage devices that host the data. These devices 739 have no knowledge of files, mandatory locks, or share reservations, 740 and are not in a position to enforce such restrictions. For this 741 reason the NFSv4 server MUST NOT grant layouts that conflict with 742 mandatory locks or share reservations. Further, if a conflicting 743 mandatory lock request or a conflicting open request arrives at the 744 server, the server MUST recall the part of the layout in conflict 745 with the request before granting the request. 747 2.3.6. End-of-file Processing 749 The end-of-file location can be changed in two ways: implicitly as 750 the result of a WRITE or LAYOUTCOMMIT beyond the current end-of-file, 751 or explicitly as the result of a SETATTR request. Typically, when a 752 file is truncated by an NFSv4 client via the SETATTR call, the server 753 frees any disk blocks belonging to the file which are beyond the new 754 end-of-file byte, and MUST write zeros to the portion of the new end- 755 of-file block beyond the new end-of-file byte. These actions render 756 any pNFS layouts which refer to the blocks that are freed or written 757 semantically invalid. 
Therefore, the server MUST recall from clients 758 the portions of any pNFS layouts which refer to blocks that will be 759 freed or written by the server before processing the truncate 760 request. These recalls may take time to complete; as explained in 761 [NFSv4.1], if the server cannot respond to the client SETATTR request 762 in a reasonable amount of time, it SHOULD reply to the client with 763 the error NFS4ERR_DELAY. 765 Blocks in the PNFS_BLOCK_INVALID_DATA state which lie beyond the new 766 end-of-file block present a special case. The server has reserved 767 these blocks for use by a pNFS client with a writable layout for the 768 file, but the client has yet to commit the blocks, and they are not 769 yet a part of the file mapping on disk. The server MAY free these 770 blocks while processing the SETATTR request. If so, the server MUST 771 recall any layouts from pNFS clients which refer to the blocks before 772 processing the truncate. If the server does not free the 773 PNFS_BLOCK_INVALID_DATA blocks while processing the SETATTR request, 774 it need not recall layouts which refer only to the PNFS_BLOCK_INVALID_DATA 775 blocks. 777 When a file is extended implicitly by a WRITE or LAYOUTCOMMIT beyond 778 the current end-of-file, or extended explicitly by a SETATTR request, 779 the server need not recall any portions of any pNFS layouts. 781 2.3.7. Layout Hints 783 The SETATTR operation supports a layout hint attribute [NFSv4.1]. 784 When the client sets a layout hint (data type layouthint4) with a 785 layout type of LAYOUT4_BLOCK_VOLUME (the loh_type field), the 786 loh_body field contains a value of data type pnfs_block_layouthint4. 788 ////* block layout specific type for loh_body */ 789 ///struct pnfs_block_layouthint4 { 790 /// uint64_t blh_maximum_io_time; /* maximum i/o time in seconds 791 /// */ 792 ///}; 793 /// 795 The block layout client uses the layout hint data structure to 796 communicate to the server the maximum time that it may take an I/O to 797 execute on the client. Clients using block layouts MUST set the 798 layout hint attribute before using LAYOUTGET operations. 800 2.3.8. Client Fencing 802 The pNFS block protocol must handle situations in which a system 803 failure, typically a network connectivity issue, requires the server 804 to unilaterally revoke extents from one client in order to transfer 805 the extents to another client. The pNFS server implementation MUST 806 ensure that when resources are transferred to another client, they 807 are not used by the client originally owning them, and this must be 808 ensured against any possible combination of partitions and delays 809 among all of the participants in the protocol (server, storage, and 810 client). Two approaches to guaranteeing this isolation are possible 811 and are discussed below. 813 One implementation choice for fencing the block client from the block 814 storage is the use of LUN (Logical Unit Number) masking or mapping at 815 the storage systems or storage area network to disable access by the 816 client to be isolated. This requires server access to a management 817 interface for the storage system and authorization to perform LUN 818 masking and management operations. For example, SMI-S [SMIS] 819 provides a means to discover and mask LUNs, including a means of 820 associating clients with the necessary World Wide Names or Initiator 821 names to be masked.
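The timed-lease fencing described next relies on the client having communicated its maximum I/O time via the layout hint of Section 2.3.7.  As a non-normative sketch, the following shows how a client might build the loh_body payload; the value of LAYOUT4_BLOCK_VOLUME comes from [NFSV4.1], and the surrounding SETATTR and RPC plumbing (not shown) would come from the client's NFSv4.1 implementation.

   # Sketch (Python): XDR-encode pnfs_block_layouthint4 (a single
   # uint64_t, big-endian) as the opaque loh_body of a layouthint4
   # whose loh_type is LAYOUT4_BLOCK_VOLUME.
   import struct

   LAYOUT4_BLOCK_VOLUME = 3   # layout type assigned in [NFSV4.1]

   def block_layout_hint(max_io_seconds):
       loh_body = struct.pack(">Q", max_io_seconds)
       return LAYOUT4_BLOCK_VOLUME, loh_body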
In the absence of support for LUN masking, the server has to rely on 824 the clients to implement a timed lease I/O fencing mechanism. 825 Because clients do not know if the server is using LUN masking, in 826 all cases the client MUST implement timed lease fencing. In timed 827 lease fencing we define two time periods: the first, "lease_time", is 828 the length of a lease as defined by the server's lease_time attribute 829 (see [NFSV4.1]); the second, "blh_maximum_io_time", is the maximum 830 time it can take for a client I/O to the storage system to either 831 complete or fail; this value is often 30 seconds or 60 seconds, but 832 may be longer in some environments. If the maximum client I/O time 833 cannot be bounded, the client MUST use a value of all 1s as the 834 blh_maximum_io_time. 836 The client MUST use SETATTR with a layout hint of type 837 LAYOUT4_BLOCK_VOLUME to inform the server of its maximum I/O time 838 prior to issuing the first LAYOUTGET operation. The maximum I/O 839 time hint is a per-client attribute, and as such the server SHOULD 840 maintain the value set by each client. A server which implements 841 fencing via LUN masking SHOULD accept any maximum I/O time value from 842 a client. A server which does not implement fencing may return the 843 error NFS4ERR_INVAL to the SETATTR operation. Such a server SHOULD 844 return NFS4ERR_INVAL when a client sends an unbounded maximum I/O 845 time (all 1s), or when the maximum I/O time is significantly greater 846 than that of other clients using block layouts with pNFS. 848 When a client receives the error NFS4ERR_INVAL in response to the 849 SETATTR operation for a layout hint, the client MUST NOT use the 850 LAYOUTGET operation. After responding with NFS4ERR_INVAL to the 851 SETATTR for the layout hint, the server MUST return the error 852 NFS4ERR_LAYOUTUNAVAILABLE to all subsequent LAYOUTGET operations from 853 that client. Thus the server, by returning either NFS4ERR_INVAL or 854 NFS4_OK, determines whether or not a client with a large or an 855 unbounded maximum I/O time may use pNFS. 857 Using the lease time and the maximum I/O time values, we specify the 858 behavior of the client and server as follows. 860 When a client receives layout information via a LAYOUTGET operation, 861 those layouts are valid for at most "lease_time" seconds from when 862 the server granted them. A layout is renewed by any successful 863 SEQUENCE operation, or whenever a new stateid is created or updated 864 (see the section "Lease Renewal" of [NFSV4.1]). If the layout lease 865 is not renewed prior to expiration, the client MUST cease to use the 866 layout after "lease_time" seconds from when it either sent the 867 original LAYOUTGET operation, or sent the last operation renewing the 868 lease. In other words, the client may not issue any I/O to blocks 869 specified by an expired layout. In the presence of large 870 communication delays between the client and server, it is even 871 possible for the lease to expire prior to the server response 872 arriving at the client. In such a situation, the client MUST NOT use 873 the expired layouts, and SHOULD revert to using standard NFSv4.1 READ 874 and WRITE operations. Furthermore, the client must be configured 875 such that I/O operations complete within the "blh_maximum_io_time" 876 even in the presence of multipath drivers that will retry I/Os via 877 multiple paths.
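A non-normative sketch of the client-side bookkeeping implied by these rules follows; the class and attribute names are hypothetical.

   # Sketch (Python): client-side timed-lease fencing check.  The
   # client records the local time at which it *sent* the operation
   # that last renewed the lease, a conservative starting point for
   # the lease_time countdown.
   import time

   class LayoutLease:
       def __init__(self, lease_time):
           self.lease_time = lease_time
           self.sent_last_renewal = time.monotonic()

       def renewed(self):
           # Call when a lease-renewing operation (e.g., a successful
           # SEQUENCE) is sent.
           self.sent_last_renewal = time.monotonic()

       def may_issue_io(self):
           # Layout-described I/O must stop once lease_time seconds
           # have passed since the last renewing operation was sent.
           return time.monotonic() - self.sent_last_renewal < self.lease_time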
As stated in the section "Dealing with Lease Expiration on the 880 Client" of [NFSV4.1], if any SEQUENCE operation is successful, but 881 sr_status_flags has SEQ4_STATUS_EXPIRED_ALL_STATE_REVOKED, 882 SEQ4_STATUS_EXPIRED_SOME_STATE_REVOKED, or 883 SEQ4_STATUS_ADMIN_STATE_REVOKED set, the client MUST immediately 884 cease to use all layouts and device id to device address mappings 885 associated with the corresponding server. 887 In the absence of known two-way communication between the client and 888 the server on the fore channel, the server must wait for at least the 889 time period "lease_time" plus "blh_maximum_io_time" before 890 transferring layouts from the original client to any other client. 891 The server, like the client, must take a conservative approach, and 892 start the lease expiration timer from the time that it received the 893 operation which last renewed the lease. 895 2.4. Crash Recovery Issues 897 When the server crashes while the client holds a writable layout, and 898 the client has written data to blocks covered by the layout, and the 899 blocks are still in the PNFS_BLOCK_INVALID_DATA state, the client has 900 two options for recovery. If the data that has been written to these 901 blocks is still cached by the client, the client can simply re-write 902 the data via NFSv4, once the server has come back online. However, 903 if the data is no longer in the client's cache, the client MUST NOT 904 attempt to source the data from the data servers. Instead, it should 905 attempt to commit the blocks in question to the server during the 906 server's recovery grace period, by sending a LAYOUTCOMMIT with the 907 "loca_reclaim" flag set to true. This process is described in detail 908 in [NFSv4.1] section 18.42.4. 910 2.5. Recalling resources: CB_RECALL_ANY 912 The server may decide that it cannot hold all of the state for 913 layouts without running out of resources. In such a case, it is free 914 to recall individual layouts using CB_LAYOUTRECALL to reduce the 915 load, or it may choose to request that the client return any layout. 917 For the block layout, we define the following bit: 918 ///const RCA4_BLK_LAYOUT_RECALL_ANY_LAYOUTS = 4; 920 When the server sends a CB_RECALL_ANY request to a client specifying 921 the RCA4_BLK_LAYOUT_RECALL_ANY_LAYOUTS bit in craa_type_mask, the 922 client should immediately respond with NFS4_OK, and then 923 asynchronously return complete file layouts until the number of files 924 with layouts cached on the client is less than craa_objects_to_keep. 926 The block layout does not currently use bits 5, 6, or 7. If any of 927 these bits are set, the client should return NFS4ERR_INVAL. 929 2.6. Transient and Permanent Errors 931 The server may respond to LAYOUTGET with a variety of error statuses. 932 These errors can convey transient conditions or more permanent 933 conditions that are unlikely to be resolved soon. 935 The transient errors, NFS4ERR_RECALLCONFLICT and NFS4ERR_TRYLATER, are 936 used to indicate that the server cannot immediately grant the layout 937 to the client. In the former case, this is because the server has 938 recently issued a CB_LAYOUTRECALL to the requesting client, whereas 939 in the case of NFS4ERR_TRYLATER, the server cannot grant the request, 940 possibly due to sharing conflicts with other clients. In either 941 case, a reasonable approach for the client is to wait several 942 milliseconds and retry the request.
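A non-normative sketch of this wait-and-retry behavior follows; the layoutget callable and the string status values stand in for the client's real NFSv4.1 machinery, and the bounded retry count anticipates the forward-progress guidance that follows.

   # Sketch (Python): retry LAYOUTGET on transient errors, giving up
   # after a bounded number of attempts so the caller can fall back to
   # ordinary READ/WRITE through the server.
   import time

   TRANSIENT = ("NFS4ERR_TRYLATER", "NFS4ERR_RECALLCONFLICT")

   def layoutget_with_retry(layoutget, retries=8, delay=0.005):
       for _ in range(retries):
           status, layout = layoutget()
           if status == "NFS4_OK":
               return layout
           if status not in TRANSIENT:
               break             # permanent condition; stop retrying
           time.sleep(delay)     # wait several milliseconds, then retry
       return None               # caller reverts to NFSv4.1 READ/WRITE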
The client SHOULD track the 943 number of retries, and if forward progress is not made, the client 944 SHOULD send the READ or WRITE operation directly to the server. 946 The error NFS4ERR_LAYOUTUNAVAILABLE may be returned by the server if 947 layouts are not supported for the requested file or its containing 948 file system. The server may also return this error code if the 949 server is in the process of migrating the file from secondary storage, 950 or for any other reason which causes the server to be unable to 951 supply the layout. As a result of receiving 952 NFS4ERR_LAYOUTUNAVAILABLE, the client SHOULD send future READ and 953 WRITE requests directly to the server. It is expected that a client 954 will not cache the file's layoutunavailable state forever, particularly 955 if the file is closed, and thus eventually, the client MAY reissue a 956 LAYOUTGET operation. 958 3. Security Considerations 960 Typically, SAN disk arrays and SAN protocols provide access control 961 mechanisms (access-logics, LUN masking, etc.) which operate at the 962 granularity of individual hosts. The functionality provided by such 963 mechanisms makes it possible for the server to "fence" individual 964 client machines from certain physical disks---that is to say, to 965 prevent individual client machines from reading or writing to certain 966 physical disks. Finer-grained access control methods are not 967 generally available. For this reason, certain security 968 responsibilities are delegated to pNFS clients for block/volume 969 layouts. Block/volume storage systems generally control access at a 970 volume granularity, and hence pNFS clients have to be trusted to only 971 perform accesses allowed by the layout extents they currently hold 972 (e.g., not access storage for files on which a layout extent is 973 not held). In general, the server will not be able to prevent a 974 client which holds a layout for a file from accessing parts of the 975 physical disk not covered by the layout. Similarly, the server will 976 not be able to prevent a client from accessing blocks covered by a 977 layout that it has already returned. This block-based level of 978 protection must be provided by the client software. 980 An alternative method of block/volume protocol use is for the storage 981 devices to export virtualized block addresses, which do reflect the 982 files to which blocks belong. These virtual block addresses are 983 exported to pNFS clients via layouts. This allows the storage device 984 to make appropriate access checks, while mapping virtual block 985 addresses to physical block addresses. In environments where the 986 security requirements are such that client-side protection from 987 access to storage outside of the layout is not sufficient, pNFS 988 block/volume storage layouts SHOULD NOT be used, unless the 989 storage device is able to implement the appropriate access checks, 990 via use of virtualized block addresses, or other means. 992 This also has implications for some NFSv4 functionality outside pNFS. 993 For instance, if a file is covered by a mandatory read-only lock, the 994 server can ensure that only readable layouts for the file are granted 995 to pNFS clients. However, it is up to each pNFS client to ensure 996 that the readable layout is used only to service read requests, and 997 not to allow writes to the existing parts of the file.
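As a non-normative illustration of the client-side enforcement required by Section 2.3.5 and discussed above, the following sketch checks that a proposed direct I/O lies entirely within held extents whose state permits the operation.  The extent objects, with bex_file_offset, bex_length, and bex_state attributes, are hypothetical in-memory representations of pnfs_block_extent4.

   # Sketch (Python): refuse direct I/O that is not permitted by the
   # extents the client currently holds.
   WRITABLE = ("PNFS_BLOCK_READ_WRITE_DATA", "PNFS_BLOCK_INVALID_DATA")
   READABLE = WRITABLE + ("PNFS_BLOCK_READ_DATA", "PNFS_BLOCK_NONE_DATA")

   def io_permitted(extents, offset, length, is_write):
       allowed = WRITABLE if is_write else READABLE
       pos, end = offset, offset + length
       for ext in sorted(extents, key=lambda e: e.bex_file_offset):
           if ext.bex_state not in allowed:
               continue
           lo = ext.bex_file_offset
           hi = lo + ext.bex_length
           if lo <= pos < hi:
               pos = hi
               if pos >= end:
                   return True
       return False   # any uncovered byte: do not issue the I/O directly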
Since 998 block/volume storage systems are generally not capable of enforcing 999 such file-based security, in environments where pNFS clients cannot 1000 be trusted to enforce such policies, pNFS block/volume storage 1001 layouts SHOULD NOT be used. 1003 Access to block/volume storage is logically at a lower layer of the 1004 I/O stack than NFSv4, and hence NFSv4 security is not directly 1005 applicable to protocols that access such storage directly. Depending 1006 on the protocol, some of the security mechanisms provided by NFSv4 1007 (e.g., encryption, cryptographic integrity) may not be available, or 1008 may be provided via different means. At one extreme, pNFS with 1009 block/volume storage can be used with storage access protocols (e.g., 1010 parallel SCSI) that provide essentially no security functionality. 1011 At the other extreme, pNFS may be used with storage protocols such as 1012 iSCSI that provide significant functionality. It is the 1013 responsibility of those administering and deploying pNFS with a 1014 block/volume storage access protocol to ensure that appropriate 1015 protection is provided to that protocol (physical security is a 1016 common means for protocols not based on IP). In environments where 1017 the security requirements for the storage protocol cannot be met, 1018 pNFS block/volume storage layouts SHOULD NOT be used. 1020 When security is available for a storage protocol, it is generally at 1021 a different granularity and with a different notion of identity than 1022 NFSv4 (e.g., NFSv4 controls user access to files, iSCSI controls 1023 initiator access to volumes). The responsibility for enforcing 1024 appropriate correspondences between these security layers is placed 1025 upon the pNFS client. As with the issues in the first paragraph of 1026 this section, in environments where the security requirements are 1027 such that client-side protection from access to storage outside of 1028 the layout is not sufficient, pNFS block/volume storage layouts 1029 SHOULD NOT be used. 1031 4. Conclusions 1033 This draft specifies the block/volume layout type for pNFS and 1034 associated functionality. 1036 5. IANA Considerations 1038 There are no IANA considerations in this document. All pNFS IANA 1039 Considerations are covered in [NFSV4.1]. 1041 6. Acknowledgments 1043 This draft draws extensively on the authors' familiarity with the 1044 mapping functionality and protocol in EMC's MPFS (previously named 1045 HighRoad) system [MPFS]. The protocol used by MPFS is called FMP 1046 (File Mapping Protocol); it is an add-on protocol that runs in 1047 parallel with file system protocols such as NFSv3 to provide pNFS- 1048 like functionality for block/volume storage. While drawing on FMP, 1049 the data structures and functional considerations in this draft 1050 differ in significant ways, based on lessons learned and the 1051 opportunity to take advantage of NFSv4 features such as COMPOUND 1052 operations. The design to support pNFS client participation in copy- 1053 on-write is based on text and ideas contributed by Craig Everhart 1054 (formerly with IBM). 1056 Andy Adamson, Richard Chandler, Benny Halevy, Fredric Isaman, and 1057 Mario Wurzl all helped to review drafts of this specification. 1059 7. References 1061 7.1. Normative References 1063 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1064 Requirement Levels", BCP 14, RFC 2119, March 1997. 1066 [NFSV4.1] Shepler, S., Eisler, M., and Noveck, D. 
ed., "NFSv4 Minor 1067 Version 1", draft-ietf-nfsv4-minorversion1-14.txt, Internet 1068 Draft, July 2007. 1070 [XDR] Eisler, M., "XDR: External Data Representation Standard", 1071 STD 67, RFC 4506, May 2006. 1073 7.2. Informative References 1075 [MPFS] EMC Corporation, "EMC Celerra Multi-Path File System", EMC 1076 Data Sheet, available at: 1077 http://www.emc.com/collateral/software/data-sheet/h2006-celerra-mpfs- 1078 mpfsi.pdf 1079 link checked 13 March 2008 1081 [SMIS] SNIA, "SNIA Storage Management Initiative Specification", 1082 version 1.0.2, available at: 1083 http://www.snia.org/tech_activities/standards/curr_standards/smi/SMI- 1084 S_Technical_Position_v1.0.3r1.pdf 1085 link checked 13 March 2008 1087 Authors' Addresses 1089 David L. Black 1090 EMC Corporation 1091 176 South Street 1092 Hopkinton, MA 01748 1094 Phone: +1 (508) 293-7953 1095 Email: black_david@emc.com 1096 Stephen Fridella 1097 EMC Corporation 1098 228 South Street 1099 Hopkinton, MA 01748 1101 Phone: +1 (508) 249-3528 1102 Email: fridella_stephen@emc.com 1104 Jason Glasgow 1105 EMC Corporation 1106 32 Coslin Drive 1107 Southboro, MA 01772 1109 Phone: +1 (508) 305 8831 1110 Email: glasgow_jason@emc.com 1112 Intellectual Property Statement 1114 The IETF takes no position regarding the validity or scope of any 1115 Intellectual Property Rights or other rights that might be claimed to 1116 pertain to the implementation or use of the technology described in 1117 this document or the extent to which any license under such rights 1118 might or might not be available; nor does it represent that it has 1119 made any independent effort to identify any such rights. Information 1120 on the procedures with respect to rights in RFC documents can be 1121 found in BCP 78 and BCP 79. 1123 Copies of IPR disclosures made to the IETF Secretariat and any 1124 assurances of licenses to be made available, or the result of an 1125 attempt made to obtain a general license or permission for the use of 1126 such proprietary rights by implementers or users of this 1127 specification can be obtained from the IETF on-line IPR repository at 1128 http://www.ietf.org/ipr. 1130 The IETF invites any interested party to bring to its attention any 1131 copyrights, patents or patent applications, or other proprietary 1132 rights that may cover technology that may be required to implement 1133 this standard. Please address the information to the IETF at ietf- 1134 ipr@ietf.org. 1136 Disclaimer of Validity 1138 This document and the information contained herein are provided on an 1139 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 1140 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 1141 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 1142 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 1143 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 1144 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 1146 Copyright Statement 1148 Copyright (C) The IETF Trust (2008). 1150 This document is subject to the rights, licenses and restrictions 1151 contained in BCP 78, and except as set forth therein, the authors 1152 retain all their rights. 1154 Acknowledgment 1156 Funding for the RFC Editor function is currently provided by the 1157 Internet Society.