2 NFSv4 Working Group D. Black 3 Internet Draft EMC Corporation 4 Expires: June 25, 2009 S. Fridella 5 Intended Status: Proposed Standard EMC Corporation 6 J. Glasgow 7 Google 8 December 22, 2008 10 pNFS Block/Volume Layout 11 draft-ietf-nfsv4-pnfs-block-12.txt 13 Status of this Memo 15 This Internet-Draft is submitted to IETF in full conformance with the 16 provisions of BCP 78 and BCP 79. 18 Internet-Drafts are working documents of the Internet Engineering 19 Task Force (IETF), its areas, and its working groups. Note that 20 other groups may also distribute working documents as Internet- 21 Drafts. 23 Internet-Drafts are draft documents valid for a maximum of six months 24 and may be updated, replaced, or obsoleted by other documents at any 25 time. It is inappropriate to use Internet-Drafts as reference 26 material or to cite them other than as "work in progress." 28 The list of current Internet-Drafts can be accessed at 29 http://www.ietf.org/ietf/1id-abstracts.txt 31 The list of Internet-Draft Shadow Directories can be accessed at 32 http://www.ietf.org/shadow.html 34 This Internet-Draft will expire on June 25, 2009.
36 Copyright Notice 38 Copyright (c) 2008 IETF Trust and the persons identified as the 39 document authors. All rights reserved. 41 This document is subject to BCP 78 and the IETF Trust's Legal 42 Provisions Relating to IETF Documents 43 (http://trustee.ietf.org/license-info) in effect on the date of 44 publication of this document. Please review these documents 45 carefully, as they describe your rights and restrictions with respect 46 to this document. 48 Abstract 50 Parallel NFS (pNFS) extends NFSv4 to allow clients to directly access 51 file data on the storage used by the NFSv4 server. This ability to 52 bypass the server for data access can increase both performance and 53 parallelism, but requires additional client functionality for data 54 access, some of which is dependent on the class of storage used. The 55 main pNFS operations draft specifies storage-class-independent 56 extensions to NFS; this draft specifies the additional extensions 57 (primarily data structures) for use of pNFS with block and volume 58 based storage. 60 Conventions used in this document 62 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 63 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 64 document are to be interpreted as described in RFC-2119 [RFC2119]. 66 Table of Contents 68 Copyright Notice..................................................1 69 1. Introduction...................................................4 70 1.1. General Definitions.......................................4 71 1.2. Code Components Licensing Notice..........................5 72 1.3. XDR Description...........................................5 73 2. Block Layout Description.......................................8 74 2.1. Background and Architecture...............................8 75 2.2. GETDEVICELIST and GETDEVICEINFO...........................9 76 2.2.1. Volume Identification................................9 77 2.2.2. Volume Topology.....................................11 78 2.2.3. GETDEVICELIST and GETDEVICEINFO deviceid4...........14 79 2.3. Data Structures: Extents and Extent Lists................14 80 2.3.1. Layout Requests and Extent Lists....................17 81 2.3.2. Layout Commits......................................18 82 2.3.3. Layout Returns......................................19 83 2.3.4. Client Copy-on-Write Processing.....................19 84 2.3.5. Extents are Permissions.............................21 85 2.3.6. End-of-file Processing..............................22 86 2.3.7. Layout Hints........................................23 87 2.3.8. Client Fencing......................................23 88 2.4. Crash Recovery Issues....................................25 89 2.5. Recalling resources: CB_RECALL_ANY.......................26 90 2.6. Transient and Permanent Errors...........................26 91 3. Security Considerations.......................................27 92 4. Conclusions...................................................29 93 5. IANA Considerations...........................................29 94 6. Acknowledgments...............................................29 95 7. References....................................................29 96 7.1. Normative References.....................................29 97 7.2. Informative References...................................30 98 Authors' Addresses...............................................30 100 1. 
Introduction 102 Figure 1 shows the overall architecture of a Parallel NFS (pNFS) 103 system: 105 +-----------+ 106 |+-----------+ +-----------+ 107 ||+-----------+ | | 108 ||| | NFSv4.1 + pNFS | | 109 +|| Clients |<------------------------------>| Server | 110 +| | | | 111 +-----------+ | | 112 ||| +-----------+ 113 ||| | 114 ||| | 115 ||| Storage +-----------+ | 116 ||| Protocol |+-----------+ | 117 ||+----------------||+-----------+ Control | 118 |+-----------------||| | Protocol| 119 +------------------+|| Storage |------------+ 120 +| Systems | 121 +-----------+ 123 Figure 1 pNFS Architecture 125 The overall approach is that pNFS-enhanced clients obtain sufficient 126 information from the server to enable them to access the underlying 127 storage (on the Storage Systems) directly. See the pNFS portion of 128 [NFSV4.1] for more details. This draft is concerned with access from 129 pNFS clients to Storage Systems over storage protocols based on 130 blocks and volumes, such as the SCSI protocol family (e.g., parallel 131 SCSI, FCP for Fibre Channel, iSCSI, SAS, and FCoE). This class of 132 storage is referred to as block/volume storage. While the Server to 133 Storage System protocol, called the "Control Protocol", is not of 134 concern for interoperability here, it will typically also be a 135 block/volume protocol when clients use block/ volume protocols. 137 1.1. General Definitions 139 The following definitions are provided for the purpose of providing 140 an appropriate context for the reader. 142 Byte 143 This document defines a byte as an octet, i.e. a datum exactly 8 144 bits in length. 146 Client 148 The "client" is the entity that accesses the NFS server's 149 resources. The client may be an application which contains the 150 logic to access the NFS server directly. The client may also be 151 the traditional operating system client that provides remote file 152 system services for a set of applications. 154 Server 156 The "Server" is the entity responsible for coordinating client 157 access to a set of file systems and is identified by a Server 158 owner. 160 1.2. Code Components Licensing Notice 162 The external data representation (XDR) description and scripts for 163 extracting the XDR description are Code Components as described in 164 Section 4 of "Legal Provisions Relating to IETF Documents" [LEGAL]. 165 These Code Components are licensed according to the terms of Section 166 4 of "Legal Provisions Relating to IETF Documents". 168 1.3. XDR Description 170 This document contains the XDR ([XDR]) description of the NFSv4.1 171 block layout protocol. The XDR description is embedded in this 172 document in a way that makes it simple for the reader to extract into 173 a ready to compile form. The reader can feed this document into the 174 following shell script to produce the machine readable XDR 175 description of the NFSv4.1 block layout: 177 #!/bin/sh 178 grep '^ *///' $* | sed 's?^ */// ??' | sed 's?^ *///$??' 180 I.e. if the above script is stored in a file called "extract.sh", and 181 this document is in a file called "spec.txt", then the reader can do: 183 sh extract.sh < spec.txt > nfs4_block_layout_spec.x 185 The effect of the script is to remove both leading white space and a 186 sentinel sequence of "///" from each matching line. 188 The embedded XDR file header follows, with subsequent pieces embedded 189 throughout the document: 191 /// /* 192 /// * This code was derived from IETF RFC &rfc.number. 
193 [[RFC Editor: please insert RFC number if needed]] 194 /// * Please reproduce this note if possible. 195 /// */ 196 /// /* 197 /// * Copyright (c) 2008 IETF Trust and the persons identified 198 /// * as the document authors. All rights reserved. 199 /// * 200 /// * Redistribution and use in source and binary forms, with 201 /// * or without modification, are permitted provided that the 202 /// * following conditions are met: 203 /// * 204 /// * o Redistributions of source code must retain the above 205 /// * copyright notice, this list of conditions and the 206 /// * following disclaimer. 207 /// * 208 /// * o Redistributions in binary form must reproduce the above 209 /// * copyright notice, this list of conditions and the 210 /// * following disclaimer in the documentation and/or other 211 /// * materials provided with the distribution. 212 /// * 213 /// * o Neither the name of Internet Society, IETF or IETF 214 /// * Trust, nor the names of specific contributors, may be 215 /// * used to endorse or promote products derived from this 216 /// * software without specific prior written permission. 217 /// * 218 /// * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS 219 /// * AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED 220 /// * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 221 /// * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS 222 /// * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO 223 /// * EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE 224 /// * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, 225 /// * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT 226 /// * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 227 /// * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS 228 /// * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF 229 /// * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 230 /// * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING 231 /// * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF 232 /// * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 233 /// */ 234 /// 235 /// /* 236 /// * nfs4_block_layout_prot.x 237 /// */ 238 /// 239 /// %#include "nfsv41.h" 240 /// 242 The XDR code contained in this document depends on types from 243 nfsv41.x file. This includes both nfs types that end with a 4, such 244 as offset4, length4, etc, as well as more generic types such as 245 uint32_t and uint64_t. 247 2. Block Layout Description 249 2.1. Background and Architecture 251 The fundamental storage abstraction supported by block/volume storage 252 is a storage volume consisting of a sequential series of fixed size 253 blocks. This can be thought of as a logical disk; it may be realized 254 by the Storage System as a physical disk, a portion of a physical 255 disk or something more complex (e.g., concatenation, striping, RAID, 256 and combinations thereof) involving multiple physical disks or 257 portions thereof. 259 A pNFS layout for this block/volume class of storage is responsible 260 for mapping from an NFS file (or portion of a file) to the blocks of 261 storage volumes that contain the file. The blocks are expressed as 262 extents with 64 bit offsets and lengths using the existing NFSv4 263 offset4 and length4 types. 
Clients must be able to perform I/O to 264 the block extents without affecting additional areas of storage 265 (especially important for writes); therefore, extents MUST be aligned 266 to 512-byte boundaries, and writable extents MUST be aligned to the 267 block size used by the NFSv4 server in managing the actual file 268 system (4 kilobytes and 8 kilobytes are common block sizes). This 269 block size is available as the NFSv4.1 layout_blksize attribute 270 [NFSV4.1]. Readable extents SHOULD be aligned to the block size used 271 by the NFSv4 server, but in order to support legacy file systems with 272 fragments, alignment to 512 byte boundaries is acceptable. 274 The pNFS operation for requesting a layout (LAYOUTGET) includes the 275 "layoutiomode4 loga_iomode" argument which indicates whether the 276 requested layout is for read-only use or read-write use. A read-only 277 layout may contain holes that are read as zero, whereas a read-write 278 layout will contain allocated, but un-initialized storage in those 279 holes (read as zero, can be written by client). This draft also 280 supports client participation in copy-on-write (e.g., for file systems 281 with snapshots) by providing both read-only and un-initialized 282 storage for the same range in a layout. Reads are initially 283 performed on the read-only storage, with writes going to the un- 284 initialized storage. After the first write that initializes the un- 285 initialized storage, all reads are performed to that now-initialized 286 writeable storage, and the corresponding read-only storage is no 287 longer used. 289 The block/volume layout solution expands the security 290 responsibilities of the pNFS clients and there are a number of 291 environments where the mandatory-to-implement security properties for 292 NFS cannot be satisfied. The additional security responsibilities of 293 the client follow, and a full discussion is present in Section 3 294 "Security Considerations". 296 o Typically, storage area network (SAN) disk arrays and SAN 297 protocols provide access control mechanisms (e.g., logical unit 298 number mapping and/or masking) which operate at the granularity of 299 individual hosts, not individual blocks. For this reason, block- 300 based protection must be provided by the client software. 302 o Similarly, SAN disk arrays and SAN protocols typically are not 303 able to validate NFS locks that apply to file regions. For 304 instance, if a file is covered by a mandatory read-only lock, the 305 server can ensure that only readable layouts for the file are 306 granted to pNFS clients. However, it is up to each pNFS client to 307 ensure that the readable layout is used only to service read 308 requests, and not to allow writes to the existing parts of the 309 file. 311 Since block/volume storage systems are generally not capable of 312 enforcing such file-based security, in environments where pNFS 313 clients cannot be trusted to enforce such policies, pNFS block/volume 314 storage layouts SHOULD NOT be used. 316 2.2. GETDEVICELIST and GETDEVICEINFO 318 2.2.1. Volume Identification 320 Storage Systems such as storage arrays can have multiple physical 321 network ports that need not be connected to a common network, 322 resulting in a pNFS client having simultaneous multipath access to 323 the same storage volumes via different ports on different networks.
325 The networks may not even be the same technology - for example, 326 access to the same volume via both iSCSI and Fibre Channel is 327 possible, hence network addresses are difficult to use for volume 328 identification. For this reason, this pNFS block layout identifies 329 storage volumes by content, for example providing the means to match 330 (unique portions of) labels used by volume managers. Volume 331 identification is performed by matching one or more opaque byte 332 sequences to specific parts of the stored data. Any block pNFS 333 system using this layout MUST support a means of content-based unique 334 volume identification that can be employed via the data structure 335 given here. 337 /// struct pnfs_block_sig_component4 { /* disk signature component */ 338 /// int64_t bsc_sig_offset; /* byte offset of component 339 /// on volume*/ 340 /// opaque bsc_contents<>; /* contents of this component 341 /// of the signature */ 342 /// }; 343 /// 345 Note that the opaque "bsc_contents" field in the 346 "pnfs_block_sig_component4" structure MUST NOT be interpreted as a 347 zero-terminated string, as it may contain embedded zero-valued bytes. 348 There are no restrictions on alignment (e.g., neither bsc_sig_offset 349 nor the length are required to be multiples of 4). The 350 bsc_sig_offset is a signed quantity which, when positive, represents a 351 byte offset from the start of the volume, and when negative 352 represents a byte offset from the end of the volume. 354 Negative offsets are permitted in order to simplify the client 355 implementation on systems where the device label is found at a fixed 356 offset from the end of the volume. If the server uses negative 357 offsets to describe the signature, then the client and server MUST 358 NOT see different volume sizes. Negative offsets SHOULD NOT be used 359 in systems that dynamically resize volumes unless care is taken to 360 ensure that the device label is always present at the offset from the 361 end of the volume as seen by the clients. 363 A signature is an array of up to "PNFS_BLOCK_MAX_SIG_COMP" (defined 364 below) signature components. The client MUST NOT assume that all 365 signature components are colocated within a single sector on a block 366 device. 368 The pNFS client block layout driver uses this volume identification 369 to map pnfs_block_volume_type4 PNFS_BLOCK_VOLUME_SIMPLE deviceid4s to 370 its local view of a logical unit number (LUN). 372 2.2.2. Volume Topology 374 The pNFS block server volume topology is expressed as an arbitrary 375 combination of base volume types enumerated in the following data 376 structures. The individual components of the topology are contained 377 in an array and components may refer to other components by using 378 array indices.
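As a non-normative illustration only (this sketch is not part of the extractable XDR description), the following shows how a client might resolve a byte offset on the root of such a volume array down to a single PNFS_BLOCK_VOLUME_SIMPLE component, using the volume types defined in the data structures below. The Python dictionaries stand in for the decoded XDR structures; the "size" of a simple volume (which a client obtains from its local view of the LU, not from the XDR) and all field and helper names are assumptions made purely for illustration.

   def volume_length(vols, i):
       # Total byte length of vols[i].  A SIMPLE volume's length is the
       # size of the LU as seen by the client (assumed "size" field).
       v = vols[i]
       if v["type"] == "SIMPLE":
           return v["size"]
       if v["type"] == "SLICE":
           return v["length"]
       # CONCAT and STRIPE: sum of the member volumes (stripe members
       # are required to be the same size).
       return sum(volume_length(vols, m) for m in v["volumes"])

   def resolve(vols, i, offset):
       # Map a byte offset on vols[i] to (simple_volume_index, offset).
       v = vols[i]
       if v["type"] == "SIMPLE":
           return i, offset
       if v["type"] == "SLICE":
           # A slice is a contiguous byte range of another volume.
           return resolve(vols, v["volume"], v["start"] + offset)
       if v["type"] == "CONCAT":
           # Walk the concatenated members until the offset falls in one.
           for m in v["volumes"]:
               n = volume_length(vols, m)
               if offset < n:
                   return resolve(vols, m, offset)
               offset -= n
           raise ValueError("offset beyond end of volume")
       # STRIPE: stripe units are laid out round-robin across the members.
       unit, members = v["stripe_unit"], v["volumes"]
       su, within = divmod(offset, unit)
       return resolve(vols, members[su % len(members)],
                      (su // len(members)) * unit + within)

   # Example: two simple volumes striped with a 64 KB stripe unit; the
   # root of the hierarchy is the last element of the array.
   vols = [{"type": "SIMPLE", "size": 1 << 30},
           {"type": "SIMPLE", "size": 1 << 30},
           {"type": "STRIPE", "stripe_unit": 65536, "volumes": [0, 1]}]
   print(resolve(vols, len(vols) - 1, 200000))   # -> (1, 68928)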
380 /// enum pnfs_block_volume_type4 { 381 /// PNFS_BLOCK_VOLUME_SIMPLE = 0, /* volume maps to a single 382 /// LU */ 383 /// PNFS_BLOCK_VOLUME_SLICE = 1, /* volume is a slice of 384 /// another volume */ 385 /// PNFS_BLOCK_VOLUME_CONCAT = 2, /* volume is a 386 /// concatenation of 387 /// multiple volumes */ 388 /// PNFS_BLOCK_VOLUME_STRIPE = 3 /* volume is striped across 389 /// multiple volumes */ 390 /// }; 391 /// 392 /// const PNFS_BLOCK_MAX_SIG_COMP = 16;/* maximum components per 393 /// signature */ 394 /// struct pnfs_block_simple_volume_info4 { 395 /// pnfs_block_sig_component4 bsv_ds; 396 /// /* disk signature */ 397 /// }; 398 /// 399 /// 400 /// struct pnfs_block_slice_volume_info4 { 401 /// offset4 bsv_start; /* offset of the start of the 402 /// slice in bytes */ 403 /// length4 bsv_length; /* length of slice in bytes */ 404 /// uint32_t bsv_volume; /* array index of sliced 405 /// volume */ 406 /// }; 407 /// 408 /// struct pnfs_block_concat_volume_info4 { 409 /// uint32_t bcv_volumes<>; /* array indices of volumes 410 /// which are concatenated */ 411 /// }; 412 /// 413 /// struct pnfs_block_stripe_volume_info4 { 414 /// length4 bsv_stripe_unit; /* size of stripe in bytes */ 415 /// uint32_t bsv_volumes<>; /* array indices of volumes 416 /// which are striped across -- 417 /// MUST be same size */ 418 /// }; 419 /// 420 /// union pnfs_block_volume4 switch (pnfs_block_volume_type4 type) { 421 /// case PNFS_BLOCK_VOLUME_SIMPLE: 422 /// pnfs_block_simple_volume_info4 bv_simple_info; 423 /// case PNFS_BLOCK_VOLUME_SLICE: 424 /// pnfs_block_slice_volume_info4 bv_slice_info; 425 /// case PNFS_BLOCK_VOLUME_CONCAT: 426 /// pnfs_block_concat_volume_info4 bv_concat_info; 427 /// case PNFS_BLOCK_VOLUME_STRIPE: 428 /// pnfs_block_stripe_volume_info4 bv_stripe_info; 429 /// }; 430 /// 431 /// /* block layout specific type for da_addr_body */ 432 /// struct pnfs_block_deviceaddr4 { 433 /// pnfs_block_volume4 bda_volumes<>; /* array of volumes */ 434 /// }; 435 /// 437 The "pnfs_block_deviceaddr4" data structure is a structure that 438 allows arbitrarily complex nested volume structures to be encoded. 439 The types of aggregations that are allowed are stripes, 440 concatenations, and slices. Note that the volume topology expressed 441 in the pnfs_block_deviceaddr4 data structure will always resolve to a 442 set of pnfs_block_volume_type4 PNFS_BLOCK_VOLUME_SIMPLE. The array 443 of volumes is ordered such that the root of the volume hierarchy is 444 the last element of the array. Concat, slice and stripe volumes MUST 445 refer to volumes defined by lower indexed elements of the array. 447 The "pnfs_block_device_addr4" data structure is returned by the 448 server as the storage-protocol-specific opaque field da_addr_body in 449 the "device_addr4" structure by a successful GETDEVICEINFO operation. 450 [NFSV4.1]. 452 As noted above, all device_addr4 structures eventually resolve to a 453 set of volumes of type PNFS_BLOCK_VOLUME_SIMPLE. These volumes are 454 each uniquely identified by a set of signature components. 455 Complicated volume hierarchies may be composed of dozens of volumes 456 each with several signature components, thus the device address may 457 require several kilobytes. The client SHOULD be prepared to allocate 458 a large buffer to contain the result. In the case of the server 459 returning NFS4ERR_TOOSMALL the client SHOULD allocate a buffer of at 460 least gdir_mincount_bytes to contain the expected result and retry 461 the GETDEVICEINFO request. 463 2.2.3. 
GETDEVICELIST and GETDEVICEINFO deviceid4 465 The server in response to a GETDEVICELIST request typically will 466 return a single "deviceid4" in the gdlr_deviceid_list array. This is 467 because the deviceid4 when passed to GETDEVICEINFO will return a 468 "device_addr4" which encodes the entire volume hierarchy. In the 469 case of copy-on-write file systems, the "gdlr_deviceid_list" array 470 may contain two deviceid4's, one referencing the read-only volume 471 hierarchy, and one referencing the writable volume hierarchy. There 472 is no required ordering of the readable and writable ids in the array 473 as the volumes are uniquely identified by their deviceid4, and are 474 referred to by layouts using the deviceid4. Another example of the 475 server returning multiple device items occurs when the file handle 476 represents the root of a name space spanning multiple physical file 477 systems on the server, each with a different volume hierarchy. In 478 this example a server implementation may return either a list of 479 deviceids used by each of the physical file systems, or it may return 480 an empty list. 482 Each deviceid4 returned by a successful GETDEVICELIST operation is a 483 shorthand id used to reference the whole volume topology. These 484 device ids, as well as device ids return in extents of a LAYOUTGET 485 operation, can be used as input to the GETDEVICEINFO operation. 486 Decoding the "pnfs_block_deviceaddr4" results in a flat ordering of 487 data blocks mapped to PNFS_BLOCK_VOLUME_SIMPLE volumes. Combined 488 with the mapping to a client LUN described in 2.2.1 Volume 489 Identification, a logical volume offset can be mapped to a block on a 490 pNFS client LUN. [NFSV4.1] 492 2.3. Data Structures: Extents and Extent Lists 494 A pNFS block layout is a list of extents within a flat array of data 495 blocks in a logical volume. The details of the volume topology can 496 be determined by using the GETDEVICEINFO operation (see discussion of 497 volume identification, section 2.2 above). The block layout 498 describes the individual block extents on the volume that make up the 499 file. The offsets and length contained in an extent are specified in 500 units of bytes. 502 /// enum pnfs_block_extent_state4 { 503 /// PNFS_BLOCK_READ_WRITE_DATA = 0,/* the data located by this 504 /// extent is valid 505 /// for reading and writing. */ 506 /// PNFS_BLOCK_READ_DATA = 1, /* the data located by this 507 /// extent is valid for reading 508 /// only; it may not be 509 /// written. */ 510 /// PNFS_BLOCK_INVALID_DATA = 2, /* the location is valid; the 511 /// data is invalid. It is a 512 /// newly (pre-) allocated 513 /// extent. There is physical 514 /// space on the volume. */ 515 /// PNFS_BLOCK_NONE_DATA = 3 /* the location is invalid. It 516 /// is a hole in the file. 517 /// There is no physical space 518 /// on the volume. */ 519 /// }; 521 /// 522 /// struct pnfs_block_extent4 { 523 /// deviceid4 bex_vol_id; /* id of logical volume on 524 /// which extent of file is 525 /// stored. 
*/ 526 /// offset4 bex_file_offset; /* the starting byte offset in 527 /// the file */ 528 /// length4 bex_length; /* the size in bytes of the 529 /// extent */ 530 /// offset4 bex_storage_offset; /* the starting byte offset 531 /// in the volume */ 532 /// pnfs_block_extent_state4 bex_state; 533 /// /* the state of this extent */ 534 /// }; 535 /// 536 /// /* block layout specific type for loc_body */ 537 /// struct pnfs_block_layout4 { 538 /// pnfs_block_extent4 blo_extents<>; 539 /// /* extents which make up this 540 /// layout. */ 541 /// }; 542 /// 544 The block layout consists of a list of extents which map the logical 545 regions of the file to physical locations on a volume. The 546 "bex_storage_offset" field within each extent identifies a location 547 on the logical volume specified by the "bex_vol_id" field in the 548 extent. The bex_vol_id itself is shorthand for the whole topology of 549 the logical volume on which the file is stored. The client is 550 responsible for translating this logical offset into an offset on the 551 appropriate underlying SAN logical unit. In most cases all extents 552 in a layout will reside on the same volume and thus have the same 553 bex_vol_id. In the case of copy on write file systems, the 554 PNFS_BLOCK_READ_DATA extents may have a different bex_vol_id from the 555 writable extents. 557 Each extent maps a logical region of the file onto a portion of the 558 specified logical volume. The bex_file_offset, bex_length, and 559 bex_state fields for an extent returned from the server are valid for 560 all extents. In contrast, the interpretation of the 561 bex_storage_offset field depends on the value of bex_state as follows 562 (in increasing order): 564 o PNFS_BLOCK_READ_WRITE_DATA means that bex_storage_offset is valid, 565 and points to valid/initialized data that can be read and written. 567 o PNFS_BLOCK_READ_DATA means that bex_storage_offset is valid and 568 points to valid/ initialized data which can only be read. Write 569 operations are prohibited; the client may need to request a read- 570 write layout. 572 o PNFS_BLOCK_INVALID_DATA means that bex_storage_offset is valid, 573 but points to invalid un-initialized data. This data must not be 574 physically read from the disk until it has been initialized. A 575 read request for a PNFS_BLOCK_INVALID_DATA extent must fill the 576 user buffer with zeros, unless the extent is covered by a 577 PNFS_BLOCK_READ_DATA extent of a copy-on-write file system. Write 578 requests must write whole server-sized blocks to the disk; bytes 579 not initialized by the user must be set to zero. Any write to 580 storage in a PNFS_BLOCK_INVALID_DATA extent changes the written 581 portion of the extent to PNFS_BLOCK_READ_WRITE_DATA; the pNFS 582 client is responsible for reporting this change via LAYOUTCOMMIT. 584 o PNFS_BLOCK_NONE_DATA means that bex_storage_offset is not valid, 585 and this extent may not be used to satisfy write requests. Read 586 requests may be satisfied by zero-filling as for 587 PNFS_BLOCK_INVALID_DATA. PNFS_BLOCK_NONE_DATA extents may be 588 returned by requests for readable extents; they are never returned 589 if the request was for a writeable extent. 591 An extent list lists all relevant extents in increasing order of the 592 bex_file_offset of each extent; any ties are broken by increasing 593 order of the extent state (bex_state). 595 2.3.1. 
Layout Requests and Extent Lists 597 Each request for a layout specifies at least three parameters: file 598 offset, desired size, and minimum size. If the status of a request 599 indicates success, the extent list returned must meet the following 600 criteria: 602 o A request for a readable (but not writeable) layout returns only 603 PNFS_BLOCK_READ_DATA or PNFS_BLOCK_NONE_DATA extents (but not 604 PNFS_BLOCK_INVALID_DATA or PNFS_BLOCK_READ_WRITE_DATA extents). 606 o A request for a writeable layout returns 607 PNFS_BLOCK_READ_WRITE_DATA or PNFS_BLOCK_INVALID_DATA extents (but 608 not PNFS_BLOCK_NONE_DATA extents). It may also return 609 PNFS_BLOCK_READ_DATA extents only when the offset ranges in those 610 extents are also covered by PNFS_BLOCK_INVALID_DATA extents to 611 permit writes. 613 o The first extent in the list MUST contain the requested starting 614 offset. 616 o The total size of extents within the requested range MUST cover at 617 least the minimum size. One exception is allowed: the total size 618 MAY be smaller if only readable extents were requested and EOF is 619 encountered. 621 o Extents in the extent list MUST be logically contiguous for a 622 read-only layout. For a read-write layout, the set of writable 623 extents (i.e., excluding PNFS_BLOCK_READ_DATA extents) MUST be 624 logically contiguous. Every PNFS_BLOCK_READ_DATA extent in a 625 read-write layout MUST be covered by one or more 626 PNFS_BLOCK_INVALID_DATA extents. This overlap of 627 PNFS_BLOCK_READ_DATA and PNFS_BLOCK_INVALID_DATA extents is the 628 only permitted extent overlap. 630 o Extents MUST be ordered in the list by starting offset, with 631 PNFS_BLOCK_READ_DATA extents preceding PNFS_BLOCK_INVALID_DATA 632 extents in the case of equal bex_file_offsets. 634 If the minimum requested size, loga_minlength, is zero, this is an 635 indication to the metadata server that the client desires any layout 636 at offset loga_offset or less that the metadata server has "readily 637 available". Readily is subjective, and depends on the layout type 638 and the pNFS server implementation. For block layout servers, 639 readily available SHOULD be interpreted such that readable layouts 640 are always available, even if some extents are in the 641 PNFS_BLOCK_NONE_DATA state. When processing requests for writable 642 layouts, a layout is readily available if extents can be returned in 643 the PNFS_BLOCK_READ_WRITE_DATA state. 645 2.3.2. Layout Commits 647 /// /* block layout specific type for lou_body */ 648 /// struct pnfs_block_layoutupdate4 { 649 /// pnfs_block_extent4 blu_commit_list<>; 650 /// /* list of extents which 651 /// * now contain valid data. 652 /// */ 653 /// }; 654 /// 656 The "pnfs_block_layoutupdate4" structure is used by the client as the 657 block-protocol specific argument in a LAYOUTCOMMIT operation. The 658 "blu_commit_list" field is an extent list covering regions of the 659 file layout that were previously in the PNFS_BLOCK_INVALID_DATA 660 state, but have been written by the client and should now be 661 considered in the PNFS_BLOCK_READ_WRITE_DATA state. The bex_state 662 field of each extent in the blu_commit_list MUST be set to 663 PNFS_BLOCK_READ_WRITE_DATA. The extents in the commit list MUST be 664 disjoint and MUST be sorted by bex_file_offset. The 665 bex_storage_offset field is unused. Implementers should be aware 666 that a server may be unable to commit regions at a granularity 667 smaller than a file-system block (typically 4KB or 8KB). 
As noted 668 above, the block-size that the server uses is available as an NFSv4 669 attribute, and any extents included in the "blu_commit_list" MUST be 670 aligned to this granularity and have a size that is a multiple of 671 this granularity. If the client believes that its actions have moved 672 the end-of-file into the middle of a block being committed, the 673 client MUST write zeroes from the end-of-file to the end of that 674 block before committing the block. Failure to do so may result in 675 junk (uninitialized data) appearing in that area if the file is 676 subsequently extended by moving the end-of-file. 678 2.3.3. Layout Returns 680 The LAYOUTRETURN operation is done without any block layout specific 681 data. When the LAYOUTRETURN operation specifies a 682 LAYOUTRETURN4_FILE_return type, then the layoutreturn_file4 data 683 structure specifies the region of the file layout that is no longer 684 needed by the client. The opaque "lrf_body" field of the 685 "layoutreturn_file4" data structure MUST have length zero. A 686 LAYOUTRETURN operation represents an explicit release of resources by 687 the client, usually done for the purpose of avoiding unnecessary 688 CB_LAYOUTRECALL operations in the future. The client may return 689 disjoint regions of the file by using multiple LAYOUTRETURN 690 operations within a single COMPOUND operation. 692 Note that the block/volume layout supports unilateral layout 693 revocation. When a layout is unilaterally revoked by the server, 694 usually due to the client's lease time expiring, or a delegation 695 being recalled, or the client failing to return a layout in a timely 696 manner, it is important for the sake of correctness that any in- 697 flight I/Os that the client issued before the layout was revoked are 698 rejected at the storage. For the block/volume protocol, this is 699 possible by fencing a client with an expired layout timer from the 700 physical storage. Note, however, that the granularity of this 701 operation can only be at the host/logical-unit level. Thus, if one 702 of a client's layouts is unilaterally revoked by the server, it will 703 effectively render useless *all* of the client's layouts for files 704 located on the storage units comprising the logical volume. This may 705 render useless the client's layouts for files in other file systems. 707 2.3.4. Client Copy-on-Write Processing 709 Copy-on-write is a mechanism used to support file and/or file system 710 snapshots. When writing to unaligned regions, or to regions smaller 711 than a file system block, the writer must copy the portions of the 712 original file data to a new location on disk. This behavior can 713 either be implemented on the client or the server. The paragraphs 714 below describe how a pNFS block layout client implements access to a 715 file which requires copy-on-write semantics. 717 Distinguishing the PNFS_BLOCK_READ_WRITE_DATA and 718 PNFS_BLOCK_READ_DATA extent types in combination with the allowed 719 overlap of PNFS_BLOCK_READ_DATA extents with PNFS_BLOCK_INVALID_DATA 720 extents allows copy-on-write processing to be done by pNFS clients. 721 In classic NFS, this operation would be done by the server. Since 722 pNFS enables clients to do direct block access, it is useful for 723 clients to participate in copy-on-write operations. All block/volume 724 pNFS clients MUST support this copy-on-write processing. 
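The following non-normative sketch illustrates the read-merge-write sequence that the remainder of this section specifies precisely. The in-memory bytearrays standing in for the storage addressed by the PNFS_BLOCK_READ_DATA and PNFS_BLOCK_INVALID_DATA extents, as well as the function name and example parameters, are assumptions made purely for illustration.

   def cow_write(read_storage, invalid_storage, block_size, offset, data):
       # Model of client copy-on-write processing: read old data from the
       # read-only extent, merge in the new bytes, and write whole,
       # block-aligned blocks to the overlapping uninitialized extent.
       start = (offset // block_size) * block_size
       end = ((offset + len(data) + block_size - 1) // block_size) * block_size

       # Fetch the old contents of the affected range from the read-only
       # extent and merge the new data over it.  (Blocks that are entirely
       # overwritten need not be fetched; this sketch fetches the whole
       # range for simplicity.)
       merged = bytearray(read_storage[start:end])
       merged[offset - start:offset - start + len(data)] = data

       # Store whole blocks into the uninitialized extent; subsequent reads
       # of this range use the now-initialized writable storage.
       invalid_storage[start:end] = merged

       # Block-aligned range that has become PNFS_BLOCK_READ_WRITE_DATA and
       # is later reported to the server via LAYOUTCOMMIT (Section 2.3.2).
       return start, end - start

   # Example: 4 KB server blocks, a 100-byte write at file offset 5000.
   ro = bytearray(b"old data " * 2000)   # data reachable via READ_DATA extent
   wr = bytearray(len(ro))               # pre-allocated INVALID_DATA space
   print(cow_write(ro, wr, 4096, 5000, b"x" * 100))   # -> (4096, 4096)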
726 When a client wishes to write data covered by a PNFS_BLOCK_READ_DATA 727 extent, it MUST have requested a writable layout from the server; 728 that layout will contain PNFS_BLOCK_INVALID_DATA extents to cover all 729 the data ranges of that layout's PNFS_BLOCK_READ_DATA extents. More 730 precisely, for any bex_file_offset range covered by one or more 731 PNFS_BLOCK_READ_DATA extents in a writable layout, the server MUST 732 include one or more PNFS_BLOCK_INVALID_DATA extents in the layout 733 that cover the same bex_file_offset range. When performing a write 734 to such an area of a layout, the client MUST effectively copy the 735 data from the PNFS_BLOCK_READ_DATA extent for any partial blocks of 736 bex_file_offset and range, merge in the changes to be written, and 737 write the result to the PNFS_BLOCK_INVALID_DATA extent for the blocks 738 for that bex_file_offset and range. That is, if entire blocks of 739 data are to be overwritten by an operation, the corresponding 740 PNFS_BLOCK_READ_DATA blocks need not be fetched, but any partial- 741 block writes must be merged with data fetched via 742 PNFS_BLOCK_READ_DATA extents before storing the result via 743 PNFS_BLOCK_INVALID_DATA extents. For the purposes of this 744 discussion, "entire blocks" and "partial blocks" refer to the 745 server's file-system block size. Storing of data in a 746 PNFS_BLOCK_INVALID_DATA extent converts the written portion of the 747 PNFS_BLOCK_INVALID_DATA extent to a PNFS_BLOCK_READ_WRITE_DATA 748 extent; all subsequent reads MUST be performed from this extent; the 749 corresponding portion of the PNFS_BLOCK_READ_DATA extent MUST NOT be 750 used after storing data in a PNFS_BLOCK_INVALID_DATA extent. If a 751 client writes only a portion of an extent, the extent may be split at 752 block aligned boundaries. 754 When a client wishes to write data to a PNFS_BLOCK_INVALID_DATA 755 extent that is not covered by a PNFS_BLOCK_READ_DATA extent, it MUST 756 treat this write identically to a write to a file not involved with 757 copy-on-write semantics. Thus, data must be written in at least 758 block size increments, aligned to multiples of block sized offsets, 759 and unwritten portions of blocks must be zero filled. 761 In the LAYOUTCOMMIT operation that normally sends updated layout 762 information back to the server, for writable data, some 763 PNFS_BLOCK_INVALID_DATA extents may be committed as 764 PNFS_BLOCK_READ_WRITE_DATA extents, signifying that the storage at 765 the corresponding bex_storage_offset values has been stored into and 766 is now to be considered as valid data to be read. 767 PNFS_BLOCK_READ_DATA extents are not committed to the server. For 768 extents that the client receives via LAYOUTGET as 769 PNFS_BLOCK_INVALID_DATA and returns via LAYOUTCOMMIT as 770 PNFS_BLOCK_READ_WRITE_DATA, the server will understand that the 771 PNFS_BLOCK_READ_DATA mapping for that extent is no longer valid or 772 necessary for that file. 774 2.3.5. Extents are Permissions 776 Layout extents returned to pNFS clients grant permission to read or 777 write; PNFS_BLOCK_READ_DATA and PNFS_BLOCK_NONE_DATA are read-only 778 (PNFS_BLOCK_NONE_DATA reads as zeroes), PNFS_BLOCK_READ_WRITE_DATA 779 and PNFS_BLOCK_INVALID_DATA are read/write, (PNFS_BLOCK_INVALID_DATA 780 reads as zeros, any write converts it to PNFS_BLOCK_READ_WRITE_DATA). 
781 This is the only client means of obtaining permission to perform 782 direct I/O to storage devices; a pNFS client MUST NOT perform direct 783 I/O operations that are not permitted by an extent held by the 784 client. Client adherence to this rule places the pNFS server in 785 control of potentially conflicting storage device operations, 786 enabling the server to determine what does conflict and how to avoid 787 conflicts by granting and recalling extents to/from clients. 789 Block/volume class storage devices are not required to perform read 790 and write operations atomically. Overlapping concurrent read and 791 write operations to the same data may cause the read to return a 792 mixture of before-write and after-write data. Overlapping write 793 operations can be worse, as the result could be a mixture of data 794 from the two write operations; data corruption can occur if the 795 underlying storage is striped and the operations complete in 796 different orders on different stripes. A pNFS server can avoid these 797 conflicts by implementing a single writer XOR multiple readers 798 concurrency control policy when there are multiple clients who wish 799 to access the same data. This policy MUST be implemented when 800 storage devices do not provide atomicity for concurrent read/write 801 and write/write operations to the same data. 803 If a client makes a layout request that conflicts with an existing 804 layout delegation, the request will be rejected with the error 805 NFS4ERR_LAYOUTTRYLATER. This client is then expected to retry the 806 request after a short interval. During this interval the server 807 SHOULD recall the conflicting portion of the layout delegation from 808 the client that currently holds it. This reject-and-retry approach 809 does not prevent client starvation when there is contention for the 810 layout of a particular file. For this reason a pNFS server SHOULD 811 implement a mechanism to prevent starvation. One possibility is that 812 the server can maintain a queue of rejected layout requests. Each 813 new layout request can be checked to see if it conflicts with a 814 previous rejected request, and if so, the newer request can be 815 rejected. Once the original requesting client retries its request, 816 its entry in the rejected request queue can be cleared, or the entry 817 in the rejected request queue can be removed when it reaches a 818 certain age. 820 NFSv4 supports mandatory locks and share reservations. These are 821 mechanisms that clients can use to restrict the set of I/O operations 822 that are permissible to other clients. Since all I/O operations 823 ultimately arrive at the NFSv4 server for processing, the server is 824 in a position to enforce these restrictions. However, with pNFS 825 layouts, I/Os will be issued from the clients that hold the layouts 826 directly to the storage devices that host the data. These devices 827 have no knowledge of files, mandatory locks, or share reservations, 828 and are not in a position to enforce such restrictions. For this 829 reason the NFSv4 server MUST NOT grant layouts that conflict with 830 mandatory locks or share reservations. Further, if a conflicting 831 mandatory lock request or a conflicting open request arrives at the 832 server, the server MUST recall the part of the layout in conflict 833 with the request before granting the request. 835 2.3.6. 
End-of-file Processing 837 The end-of-file location can be changed in two ways: implicitly as 838 the result of a WRITE or LAYOUTCOMMIT beyond the current end-of-file, 839 or explicitly as the result of a SETATTR request. Typically, when a 840 file is truncated by an NFSv4 client via the SETATTR call, the server 841 frees any disk blocks belonging to the file which are beyond the new 842 end-of-file byte, and MUST write zeros to the portion of the new end- 843 of-file block beyond the new end-of-file byte. These actions render 844 any pNFS layouts which refer to the blocks that are freed or written 845 semantically invalid. Therefore, the server MUST recall from clients 846 the portions of any pNFS layouts which refer to blocks that will be 847 freed or written by the server before processing the truncate 848 request. These recalls may take time to complete; as explained in 849 [NFSv4.1], if the server cannot respond to the client SETATTR request 850 in a reasonable amount of time, it SHOULD reply to the client with 851 the error NFS4ERR_DELAY. 853 Blocks in the PNFS_BLOCK_INVALID_DATA state which lie beyond the new 854 end-of-file block present a special case. The server has reserved 855 these blocks for use by a pNFS client with a writable layout for the 856 file, but the client has yet to commit the blocks, and they are not 857 yet a part of the file mapping on disk. The server MAY free these 858 blocks while processing the SETATTR request. If so, the server MUST 859 recall any layouts from pNFS clients which refer to the blocks before 860 processing the truncate. If the server does not free the 861 PNFS_BLOCK_INVALID_DATA blocks while processing the SETATTR request, 862 it need not recall layouts which refer only to the PNFS_BLOCK_INVALID 863 DATA blocks. 865 When a file is extended implicitly by a WRITE or LAYOUTCOMMIT beyond 866 the current end-of-file, or extended explicitly by a SETATTR request, 867 the server need not recall any portions of any pNFS layouts. 869 2.3.7. Layout Hints 871 The SETATTR operation supports a layout hint attribute [NFSv4.1]. 872 When the client sets a layout hint (data type layouthint4) with a 873 layout type of LAYOUT4_BLOCK_VOLUME (the loh_type field), the 874 loh_body field contains a value of data type pnfs_block_layouthint4. 876 /// /* block layout specific type for loh_body */ 877 /// struct pnfs_block_layouthint4 { 878 /// uint64_t blh_maximum_io_time; /* maximum i/o time in seconds 879 /// */ 880 /// }; 881 /// 883 The block layout client uses the layout hint data structure to 884 communicate to the server the maximum time that it may take an I/O to 885 execute on the client. Clients using block layouts MUST set the 886 layout hint attribute before using LAYOUTGET operations. 888 2.3.8. Client Fencing 890 The pNFS block protocol must handle situations in which a system 891 failure, typically a network connectivity issue, requires the server 892 to unilaterally revoke extents from one client in order to transfer 893 the extents to another client. The pNFS server implementation MUST 894 ensure that when resources are transferred to another client, they 895 are not used by the client originally owning them, and this must be 896 ensured against any possible combination of partitions and delays 897 among all of the participants to the protocol (server, storage and 898 client). Two approaches to guaranteeing this isolation are possible 899 and are discussed below. 
901 One implementation choice for fencing the block client from the block 902 storage is the use of LUN masking or mapping at the storage systems 903 or storage area network to disable access by the client to be 904 isolated. This requires server access to a management interface for 905 the storage system and authorization to perform LUN masking and 906 management operations. For example, SMI-S [SMIS] provides a means to 907 discover and mask LUNs, including a means of associating clients with 908 the necessary World Wide Names or Initiator names to be masked. 910 In the absence of support for LUN masking, the server has to rely on 911 the clients to implement a timed lease I/O fencing mechanism. 912 Because clients do not know if the server is using LUN masking, in 913 all cases the client MUST implement timed lease fencing. In timed 914 lease fencing we define two time periods, the first, "lease_time" is 915 the length of a lease as defined by the server's lease_time attribute 916 (see [NFSV4.1]), and the second, "blh_maximum_io_time" is the maximum 917 time it can take for a client I/O to the storage system to either 918 complete or fail; this value is often 30 seconds or 60 seconds, but 919 may be longer in some environments. If the maximum client I/O time 920 cannot be bounded, the client MUST use a value of all 1s as the 921 blh_maximum_io_time. 923 The client MUST use SETATTR with a layout hint of type 924 LAYOUT4_BLOCK_VOLUME to inform the server of its maximum I/O time 925 prior to issuing the first LAYOUTGET operation. The maximum io time 926 hint is a per client attribute, and as such the server SHOULD 927 maintain the value set by each client. A server which implements 928 fencing via LUN masking SHOULD accept any maximum io time value from 929 a client. A server which does not implement fencing may return an 930 error NFS4ERR_INVAL to the SETATTR operation. Such a server SHOULD 931 return NFS4ERR_INVAL when a client sends an unbounded maximum I/O 932 time (all 1s), or when the maximum I/O time is significantly greater 933 than that of other clients using block layouts with pNFS. 935 When a client receives the error NFS4ERR_INVAL in response to the 936 SETATTR operation for a layout hint, the client MUST NOT use the 937 LAYOUTGET operation. After responding with NFS4ERR_INVAL to the 938 SETATTR for layout hint, the server MUST return the error 939 NFS4ERR_LAYOUTUNAVAILABLE to all subsequent LAYOUTGET operations from 940 that client. Thus the server, by returning either NFS4ERR_INVAL or 941 NFS4_OK determines whether or not a client with a large, or an 942 unbounded maximum I/O time may use pNFS. 944 Using the lease time and the maximum i/o time values, we specify the 945 behavior of the client and server as follows. 947 When a client receives layout information via a LAYOUTGET operation, 948 those layouts are valid for at most "lease_time" seconds from when 949 the server granted them. A layout is renewed by any successful 950 SEQUENCE operation, or whenever a new stateid is created or updated 951 (see the section "Lease Renewal" of [NFSV4.1]). If the layout lease 952 is not renewed prior to expiration, the client MUST cease to use the 953 layout after "lease_time" seconds from when it either sent the 954 original LAYOUTGET command, or sent the last operation renewing the 955 lease. In other words, the client may not issue any I/O to blocks 956 specified by an expired layout. 
In the presence of large 957 communication delays between the client and server it is even 958 possible for the lease to expire prior to the server response 959 arriving at the client. In such a situation the client MUST NOT use 960 the expired layouts, and SHOULD revert to using standard NFSv41 READ 961 and WRITE operations. Furthermore, the client must be configured 962 such that I/O operations complete within the "blh_maximum_io_time" 963 even in the presence of multipath drivers that will retry I/Os via 964 multiple paths. 966 As stated in the section "Dealing with Lease Expiration on the 967 Client" of [NFSV4.1], if any SEQUENCE operation is successful, but 968 sr_status_flag has SEQ4_STATUS_EXPIRED_ALL_STATE_REVOKED, 969 SEQ4_STATUS_EXPIRED_SOME_STATE_REVOKED, or 970 SEQ4_STATUS_ADMIN_STATE_REVOKED set, the client MUST immediately 971 cease to use all layouts and device id to device address mappings 972 associated with the corresponding server. 974 In the absence of known two way communication between the client and 975 the server on the fore channel, the server must wait for at least the 976 time period "lease_time" plus "blh_maximum_io_time" before 977 transferring layouts from the original client to any other client. 978 The server, like the client, must take a conservative approach, and 979 start the lease expiration timer from the time that it received the 980 operation which last renewed the lease. 982 2.4. Crash Recovery Issues 984 A critical requirement in crash recovery is that both the client and 985 the server know when the other has failed. Additionally, it is 986 required that a client sees a consistent view of data across server 987 restarts. These requirements and a full discussion of crash recovery 988 issues are covered in the "Crash Recovery" section of the NFSv41 989 specification [NFSV4.1]. This document contains additional crash 990 recovery material specific only to the block/volume layout. 992 When the server crashes while the client holds a writable layout, and 993 the client has written data to blocks covered by the layout, and the 994 blocks are still in the PNFS_BLOCK_INVALID_DATA state, the client has 995 two options for recovery. If the data that has been written to these 996 blocks is still cached by the client, the client can simply re-write 997 the data via NFSv4, once the server has come back online. However, 998 if the data is no longer in the client's cache, the client MUST NOT 999 attempt to source the data from the data servers. Instead, it should 1000 attempt to commit the blocks in question to the server during the 1001 server's recovery grace period, by sending a LAYOUTCOMMIT with the 1002 "loca_reclaim" flag set to true. This process is described in detail 1003 in [NFSv4.1] section 18.42.4. 1005 2.5. Recalling resources: CB_RECALL_ANY 1007 The server may decide that it cannot hold all of the state for 1008 layouts without running out of resources. In such a case, it is free 1009 to recall individual layouts using CB_LAYOUTRECALL to reduce the 1010 load, or it may choose to request that the client return any layout. 
1012 The NFSv4.1 spec [NFSv4.1] defines the following types: 1014 const RCA4_TYPE_MASK_BLK_LAYOUT = 4; 1016 struct CB_RECALL_ANY4args { 1017 uint32_t craa_objects_to_keep; 1018 bitmap4 craa_type_mask; 1019 }; 1021 When the server sends a CB_RECALL_ANY request to a client specifying 1022 the RCA4_TYPE_MASK_BLK_LAYOUT bit in craa_type_mask, the client 1023 should immediately respond with NFS4_OK, and then asynchronously 1024 return complete file layouts until the number of files with layouts 1025 cached on the client is less than craa_objects_to_keep. 1027 2.6. Transient and Permanent Errors 1029 The server may respond to LAYOUTGET with a variety of error statuses. 1030 These errors can convey transient conditions or more permanent 1031 conditions that are unlikely to be resolved soon. 1033 The transient errors, NFS4ERR_RECALLCONFLICT and NFS4ERR_TRYLATER, are 1034 used to indicate that the server cannot immediately grant the layout 1035 to the client. In the former case this is because the server has 1036 recently issued a CB_LAYOUTRECALL to the requesting client, whereas 1037 in the case of NFS4ERR_TRYLATER, the server cannot grant the request 1038 possibly due to sharing conflicts with other clients. In either 1039 case, a reasonable approach for the client is to wait several 1040 milliseconds and retry the request. The client SHOULD track the 1041 number of retries, and if forward progress is not made, the client 1042 SHOULD send the READ or WRITE operation directly to the server. 1044 The error NFS4ERR_LAYOUTUNAVAILABLE may be returned by the server if 1045 layouts are not supported for the requested file or its containing 1046 file system. The server may also return this error code if the 1047 server is in the process of migrating the file from secondary storage, 1048 or for any other reason which causes the server to be unable to 1049 supply the layout. As a result of receiving 1050 NFS4ERR_LAYOUTUNAVAILABLE, the client SHOULD send future READ and 1051 WRITE requests directly to the server. It is expected that a client 1052 will not cache the file's layoutunavailable state forever, particularly 1053 if the file is closed, and thus eventually, the client MAY reissue a 1054 LAYOUTGET operation. 1056 3. Security Considerations 1058 Typically, SAN disk arrays and SAN protocols provide access control 1059 mechanisms (e.g., logical unit number mapping and/or masking) which 1060 operate at the granularity of individual hosts. The functionality 1061 provided by such mechanisms makes it possible for the server to 1062 "fence" individual client machines from certain physical disks---that 1063 is to say, to prevent individual client machines from reading or 1064 writing to certain physical disks. Finer-grained access control 1065 methods are not generally available. For this reason, certain 1066 security responsibilities are delegated to pNFS clients for 1067 block/volume layouts. Block/volume storage systems generally control 1068 access at a volume granularity, and hence pNFS clients have to be 1069 trusted to only perform accesses allowed by the layout extents they 1070 currently hold (e.g., and not access storage for files on which a 1071 layout extent is not held). In general, the server will not be able 1072 to prevent a client which holds a layout for a file from accessing 1073 parts of the physical disk not covered by the layout. Similarly, the 1074 server will not be able to prevent a client from accessing blocks 1075 covered by a layout that it has already returned.
This block-based 1076 level of protection must be provided by the client software. 1078 An alternative method of block/volume protocol use is for the storage 1079 devices to export virtualized block addresses, which do reflect the 1080 files to which blocks belong. These virtual block addresses are 1081 exported to pNFS clients via layouts. This allows the storage device 1082 to make appropriate access checks, while mapping virtual block 1083 addresses to physical block addresses. In environments where the 1084 security requirements are such that client-side protection from 1085 access to storage outside of the authorized layout extents is not 1086 sufficient, pNFS block/volume storage layouts SHOULD NOT be used 1087 unless the storage device is able to implement the appropriate access 1088 checks, via use of virtualized block addresses or other means. In 1089 contrast, an environment where client-side protection may suffice 1090 consists of co-located clients, server and storage systems in a 1091 datacenter with a physically isolated SAN under control of a single 1092 system administrator or small group of system administrators. 1094 This also has implications for some NFSv4 functionality outside pNFS. 1095 For instance, if a file is covered by a mandatory read-only lock, the 1096 server can ensure that only readable layouts for the file are granted 1097 to pNFS clients. However, it is up to each pNFS client to ensure 1098 that the readable layout is used only to service read requests, and 1099 not to allow writes to the existing parts of the file. Similarly, 1100 block/volume storage devices are unable to validate NFS Access 1101 Control Lists (ACLs) and file open modes, so the client must enforce 1102 the policies before sending a read or write request to the storage 1103 device. Since block/volume storage systems are generally not capable 1104 of enforcing such file-based security, in environments where pNFS 1105 clients cannot be trusted to enforce such policies, pNFS block/volume 1106 storage layouts SHOULD NOT be used. 1108 Access to block/volume storage is logically at a lower layer of the 1109 I/O stack than NFSv4, and hence NFSv4 security is not directly 1110 applicable to protocols that access such storage directly. Depending 1111 on the protocol, some of the security mechanisms provided by NFSv4 1112 (e.g., encryption, cryptographic integrity) may not be available, or 1113 may be provided via different means. At one extreme, pNFS with 1114 block/volume storage can be used with storage access protocols (e.g., 1115 parallel SCSI) that provide essentially no security functionality. 1116 At the other extreme, pNFS may be used with storage protocols such as 1117 iSCSI that can provide significant security functionality. It is the 1118 responsibility of those administering and deploying pNFS with a 1119 block/volume storage access protocol to ensure that appropriate 1120 protection is provided to that protocol (physical security is a 1121 common means for protocols not based on IP). In environments where 1122 the security requirements for the storage protocol cannot be met, 1123 pNFS block/volume storage layouts SHOULD NOT be used. 1125 When security is available for a storage protocol, it is generally at 1126 a different granularity and with a different notion of identity than 1127 NFSv4 (e.g., NFSv4 controls user access to files, iSCSI controls 1128 initiator access to volumes). 
The responsibility for enforcing 1129 appropriate correspondences between these security layers is placed 1130 upon the pNFS client. As with the issues in the first paragraph of 1131 this section, in environments where the security requirements are 1132 such that client-side protection from access to storage outside of 1133 the layout is not sufficient, pNFS block/volume storage layouts 1134 SHOULD NOT be used. 1136 4. Conclusions 1138 This draft specifies the block/volume layout type for pNFS and 1139 associated functionality. 1141 5. IANA Considerations 1143 There are no IANA considerations in this document. All pNFS IANA 1144 Considerations are covered in [NFSV4.1]. 1146 6. Acknowledgments 1148 This draft draws extensively on the authors' familiarity with the 1149 mapping functionality and protocol in EMC's MPFS (previously named 1150 HighRoad) system [MPFS]. The protocol used by MPFS is called FMP 1151 (File Mapping Protocol); it is an add-on protocol that runs in 1152 parallel with file system protocols such as NFSv3 to provide pNFS- 1153 like functionality for block/volume storage. While drawing on FMP, 1154 the data structures and functional considerations in this draft 1155 differ in significant ways, based on lessons learned and the 1156 opportunity to take advantage of NFSv4 features such as COMPOUND 1157 operations. The design to support pNFS client participation in copy- 1158 on-write is based on text and ideas contributed by Craig Everhart. 1160 Andy Adamson, Ben Campbell, Richard Chandler, Benny Halevy, Fredric 1161 Isaman, and Mario Wurzl all helped to review drafts of this 1162 specification. 1164 7. References 1166 7.1. Normative References 1168 [LEGAL] IETF Trust, "Legal Provisions Relating to IETF Documents", 1169 URL http://trustee.ietf.org/docs/IETF-Trust-License- 1170 Policy.pdf, November 2008. 1172 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1173 Requirement Levels", BCP 14, RFC 2119, March 1997. 1175 [NFSV4.1] Shepler, S., Eisler, M., and Noveck, D. ed., "NFSv4 Minor 1176 Version 1", RFC [[RFC Editor: please insert NFSv4 Minor 1177 Version 1 RFC number]], [[RFC Editor: please insert NFSv4 1178 Minor Version 1 RFC month]] [[RFC Editor: please insert 1179 NFSv4 Minor Version 1 year]. 1180 . 1183 [XDR] Eisler, M., "XDR: External Data Representation Standard", 1184 STD 67, RFC 4506, May 2006. 1186 7.2. Informative References 1188 [MPFS] EMC Corporation, "EMC Celerra Multi-Path File System", EMC 1189 Data Sheet, available at: 1190 http://www.emc.com/collateral/software/data-sheet/h2006-celerra-mpfs- 1191 mpfsi.pdf 1192 link checked 13 March 2008 1194 [SMIS] SNIA, "SNIA Storage Management Initiative Specification", 1195 version 1.0.2, available at: 1196 http://www.snia.org/tech_activities/standards/curr_standards/smi/SMI- 1197 S_Technical_Position_v1.0.3r1.pdf 1198 link checked 13 March 2008 1200 Authors' Addresses 1202 David L. Black 1203 EMC Corporation 1204 176 South Street 1205 Hopkinton, MA 01748 1207 Phone: +1 (508) 293-7953 1208 Email: black_david@emc.com 1210 Stephen Fridella 1211 EMC Corporation 1212 228 South Street 1213 Hopkinton, MA 01748 1215 Phone: +1 (508) 249-3528 1216 Email: fridella_stephen@emc.com 1218 Jason Glasgow 1219 Google 1220 5 Cambridge Center 1221 Cambridge, MA 02142 1223 Phone: +1 (617) 575 1599 1224 Email: jglasgow@aya.yale.edu