2 NFSv4 Working Group D. Black 3 Internet Draft S. Fridella 4 Expires: September 17, 2008 J. Glasgow 5 Intended Status: Proposed Standard EMC Corporation 6 March 17, 2008 8 pNFS Block/Volume Layout 9 draft-ietf-nfsv4-pnfs-block-07.txt 11 Status of this Memo 13 By submitting this Internet-Draft, each author represents that 14 any applicable patent or other IPR claims of which he or she is 15 aware have been or will be disclosed, and any of which he or she 16 becomes aware will be disclosed, in accordance with Section 6 of 17 BCP 79. 19 Internet-Drafts are working documents of the Internet Engineering 20 Task Force (IETF), its areas, and its working groups. Note that 21 other groups may also distribute working documents as Internet- 22 Drafts. 24 Internet-Drafts are draft documents valid for a maximum of six months 25 and may be updated, replaced, or obsoleted by other documents at any 26 time.
It is inappropriate to use Internet-Drafts as reference 27 material or to cite them other than as "work in progress." 29 The list of current Internet-Drafts can be accessed at 30 http://www.ietf.org/ietf/1id-abstracts.txt 32 The list of Internet-Draft Shadow Directories can be accessed at 33 http://www.ietf.org/shadow.html 35 This Internet-Draft will expire in September 2008. 37 Abstract 39 Parallel NFS (pNFS) extends NFSv4 to allow clients to directly access 40 file data on the storage used by the NFSv4 server. This ability to 41 bypass the server for data access can increase both performance and 42 parallelism, but requires additional client functionality for data 43 access, some of which is dependent on the class of storage used. The 44 main pNFS operations draft specifies storage-class-independent 45 extensions to NFS; this draft specifies the additional extensions 46 (primarily data structures) for use of pNFS with block and volume 47 based storage. 49 Conventions used in this document 51 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 52 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 53 document are to be interpreted as described in RFC-2119 [RFC2119]. 55 Table of Contents 57 1. Introduction...................................................3 58 1.1. General Definitions.......................................3 59 1.2. XDR Description of NFSv4.1 block layout...................4 60 2. Block Layout Description.......................................5 61 2.1. Background and Architecture...............................5 62 2.2. GETDEVICELIST and GETDEVICEINFO...........................6 63 2.2.1. Volume Identification................................6 64 2.2.2. Volume Topology......................................7 65 2.2.3. GETDEVICELIST and GETDEVICEINFO deviceid4...........10 66 2.3. Data Structures: Extents and Extent Lists................10 67 2.3.1. Layout Requests and Extent Lists....................13 68 2.3.2. Layout Commits......................................14 69 2.3.3. Layout Returns......................................14 70 2.3.4. Client Copy-on-Write Processing.....................15 71 2.3.5. Extents are Permissions.............................16 72 2.3.6. End-of-file Processing..............................18 73 2.3.7. Layout Hints........................................18 74 2.3.8. Client Fencing......................................19 75 2.4. Crash Recovery Issues....................................21 76 2.5. Recalling resources: CB_RECALL_ANY.......................21 77 2.6. Transient and Permanent Errors...........................22 78 3. Security Considerations.......................................22 79 4. Conclusions...................................................24 80 5. IANA Considerations...........................................24 81 6. Acknowledgments...............................................24 82 7. References....................................................24 83 7.1. Normative References.....................................25 84 7.2. Informative References...................................25 85 Author's Addresses...............................................25 86 Intellectual Property Statement..................................26 87 Disclaimer of Validity...........................................26 88 Copyright Statement..............................................27 89 Acknowledgment...................................................27 91 1. 
Introduction 93 Figure 1 shows the overall architecture of a pNFS system: 95 +-----------+ 96 |+-----------+ +-----------+ 97 ||+-----------+ | | 98 ||| | NFSv4.1 + pNFS | | 99 +|| Clients |<------------------------------>| Server | 100 +| | | | 101 +-----------+ | | 102 ||| +-----------+ 103 ||| | 104 ||| | 105 ||| +-----------+ | 106 ||| |+-----------+ | 107 ||+----------------||+-----------+ | 108 |+-----------------||| | | 109 +------------------+|| Storage |------------+ 110 +| Systems | 111 +-----------+ 113 Figure 1 pNFS Architecture 115 The overall approach is that pNFS-enhanced clients obtain sufficient 116 information from the server to enable them to access the underlying 117 storage (on the Storage Systems) directly. See the pNFS portion of 118 [NFSV4.1] for more details. This draft is concerned with access from 119 pNFS clients to Storage Systems over storage protocols based on 120 blocks and volumes, such as the SCSI protocol family (e.g., parallel 121 SCSI, FCP for Fibre Channel, iSCSI, SAS, and FCoE). This class of 122 storage is referred to as block/volume storage. While the Server to 123 Storage System protocol is not of concern for interoperability here, 124 it will typically also be a block/volume protocol when clients use 125 block/volume protocols. 127 1.1. General Definitions 129 The following definitions are provided to give the reader an 130 appropriate context. 132 Byte 134 This document defines a byte as an octet, i.e., a datum exactly 8 135 bits in length. 137 Client 139 The "client" is the entity that accesses the NFS server's 140 resources. The client may be an application which contains the 141 logic to access the NFS server directly. The client may also be 142 the traditional operating system client that provides remote file 143 system services for a set of applications. 145 Server 147 The "Server" is the entity responsible for coordinating client 148 access to a set of file systems and is identified by a Server 149 owner. 151 1.2. XDR Description of NFSv4.1 block layout 153 This document contains the XDR ([XDR]) description of the NFSv4.1 154 block layout protocol. The XDR description is embedded in this 155 document in a way that makes it simple for the reader to extract into 156 a ready-to-compile form. The reader can feed this document into the 157 following shell script to produce the machine-readable XDR 158 description of the NFSv4.1 block layout: 160 #!/bin/sh 161 grep "^ *///" | sed 's?^ *///??' 163 I.e., if the above script is stored in a file called "extract.sh", and 164 this document is in a file called "spec.txt", then the reader can do: 166 sh extract.sh < spec.txt > nfs4_block_layout_spec.x 168 The effect of the script is to remove both leading white space and a 169 sentinel sequence of "///" from each matching line. 171 The embedded XDR file header follows, with subsequent pieces embedded 172 throughout the document: 174 ////* 175 /// * This file was machine generated for 176 /// * draft-ietf-nfsv4-pnfs-block-07 177 /// * Last updated Tue Jan 29 02:57:06 CST 2008 178 /// */ 179 ////* 180 /// * Copyright (C) The IETF Trust (2007-2008) 181 /// * All Rights Reserved. 182 /// * 183 /// * Copyright (C) The Internet Society (1998-2006). 184 /// * All Rights Reserved. 185 /// */ 186 /// 187 ////* 188 /// * nfs4_block_layout_prot.x 189 /// */ 190 /// 191 ///%#include "nfsv41.h" 192 /// 194 The XDR code contained in this document depends on types from the 195 nfsv41.x file.
This includes both NFS types that end with a 4, such 196 as offset4, length4, etc., as well as more generic types such as 197 uint32_t and uint64_t. 199 2. Block Layout Description 201 2.1. Background and Architecture 203 The fundamental storage abstraction supported by block/volume storage 204 is a storage volume consisting of a sequential series of fixed size 205 blocks. This can be thought of as a logical disk; it may be realized 206 by the Storage System as a physical disk, a portion of a physical 207 disk or something more complex (e.g., concatenation, striping, RAID, 208 and combinations thereof) involving multiple physical disks or 209 portions thereof. 211 A pNFS layout for this block/volume class of storage is responsible 212 for mapping from an NFS file (or portion of a file) to the blocks of 213 storage volumes that contain the file. The blocks are expressed as 214 extents with 64 bit offsets and lengths using the existing NFSv4 215 offset4 and length4 types. Clients must be able to perform I/O to 216 the block extents without affecting additional areas of storage 217 (especially important for writes); therefore, extents MUST be aligned 218 to 512-byte boundaries, and writable extents MUST be aligned to the 219 block size used by the NFSv4 server in managing the actual file 220 system (4 kilobytes and 8 kilobytes are common block sizes). This 221 block size is available as the NFSv4.1 layout_blksize attribute 222 [NFSV4.1]. Readable extents SHOULD be aligned to the block size used 223 by the NFSv4 server, but in order to support legacy file systems with 224 fragments, alignment to 512-byte boundaries is acceptable. 226 The pNFS operation for requesting a layout (LAYOUTGET) includes the 227 "layoutiomode4 loga_iomode" argument which indicates whether the 228 requested layout is for read-only use or read-write use. A read-only 229 layout may contain holes that are read as zero, whereas a read-write 230 layout will contain allocated, but un-initialized storage in those 231 holes (read as zero, can be written by the client). This draft also 232 supports client participation in copy-on-write (e.g., for file systems 233 with snapshots) by providing both read-only and un-initialized 234 storage for the same range in a layout. Reads are initially 235 performed on the read-only storage, with writes going to the un- 236 initialized storage. After the first write that initializes the un- 237 initialized storage, all reads are performed to that now-initialized 238 writeable storage, and the corresponding read-only storage is no 239 longer used. 241 2.2. GETDEVICELIST and GETDEVICEINFO 243 2.2.1. Volume Identification 245 Storage Systems such as storage arrays can have multiple physical 246 network ports that need not be connected to a common network, 247 resulting in a pNFS client having simultaneous multipath access to 248 the same storage volumes via different ports on different networks. 249 The networks may not even be the same technology - for example, 250 access to the same volume via both iSCSI and Fibre Channel is 251 possible, hence network addresses are difficult to use for volume 252 identification. For this reason, this pNFS block layout identifies 253 storage volumes by content, for example providing the means to match 254 (unique portions of) labels used by volume managers. Any block pNFS 255 system using this layout MUST support a means of content-based unique 256 volume identification that can be employed via the data structure 257 given here.
259 ///struct pnfs_block_sig_component4 { /* disk signature component */ 260 /// int64_t sig_offset; /* byte offset of component 261 /// on volume*/ 262 /// opaque contents<>; /* contents of this component 263 /// of the signature */ 264 ///}; 265 /// 267 Note that the opaque "contents" field in the 268 "pnfs_block_sig_component4" structure MUST NOT be interpreted as a 269 zero-terminated string, as it may contain embedded zero-valued bytes. 270 There are no restrictions on alignment (e.g., neither sig_offset nor 271 the length are required to be multiples of 4). The sig_offset is a 272 signed quantity which, when positive, represents a byte offset from 273 the start of the volume and, when negative, represents a byte offset 274 from the end of the volume. 276 Negative offsets are permitted in order to simplify the client 277 implementation on systems where the device label is found at a fixed 278 offset from the end of the volume. If the server uses negative 279 offsets to describe the signature, then the client and server MUST 280 NOT see different volume sizes. Negative offsets SHOULD NOT be used 281 in systems that dynamically resize volumes unless care is taken to 282 ensure that the device label is always present at the offset from the 283 end of the volume as seen by the clients. 285 A signature is an array of up to "PNFS_BLOCK_MAX_SIG_COMP" (defined 286 below) signature components. The client MUST NOT assume that all 287 signature components are colocated within a single sector on a block 288 device. 290 The pNFS client block layout driver uses this volume identification 291 to map pnfs_block_volume_type4 PNFS_BLOCK_VOLUME_SIMPLE deviceid4s to 292 its local view of a LUN.
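To make content-based identification concrete, the following is a
minimal C sketch of checking one signature component against a
candidate LUN. It assumes POSIX pread() and a C structure mirroring
the XDR above; the structure layout, helper name, and error handling
are illustrative assumptions, not part of the protocol.

   #include <stdint.h>
   #include <stdlib.h>
   #include <string.h>
   #include <unistd.h>

   struct sig_component {
       int64_t   sig_offset;    /* negative: offset from end of volume */
       uint32_t  contents_len;  /* length of the opaque contents */
       uint8_t  *contents;      /* expected bytes at that offset */
   };

   /* Returns 1 if the component matches the device, 0 if it does not,
    * and -1 on an I/O error.  vol_size is the client's view of the
    * volume size in bytes, needed to resolve negative offsets. */
   static int sig_component_matches(int fd, uint64_t vol_size,
                                    const struct sig_component *sc)
   {
       uint64_t off = (sc->sig_offset >= 0)
           ? (uint64_t)sc->sig_offset
           : vol_size - (uint64_t)(-sc->sig_offset);
       uint8_t *buf = malloc(sc->contents_len);
       if (buf == NULL)
           return -1;
       ssize_t n = pread(fd, buf, sc->contents_len, (off_t)off);
       int match = (n == (ssize_t)sc->contents_len &&
                    memcmp(buf, sc->contents, sc->contents_len) == 0);
       free(buf);
       return (n < 0) ? -1 : match;
   }

A LUN is treated as the identified volume only when every component
of the signature matches; memcmp() rather than a string comparison is
essential because the contents may contain embedded zero bytes.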
294 2.2.2. Volume Topology 296 The pNFS block server volume topology is expressed as an arbitrary 297 combination of base volume types enumerated in the following data 298 structures. The individual components of the topology are contained 299 in an array and components may refer to other components by using 300 array indices. 302 ///enum pnfs_block_volume_type4 { 303 /// PNFS_BLOCK_VOLUME_SIMPLE = 0, /* volume maps to a single 304 /// LU */ 305 /// PNFS_BLOCK_VOLUME_SLICE = 1, /* volume is a slice of 306 /// another volume */ 307 /// PNFS_BLOCK_VOLUME_CONCAT = 2, /* volume is a 308 /// concatenation of 309 /// multiple volumes */ 310 /// PNFS_BLOCK_VOLUME_STRIPE = 3 /* volume is striped across 311 /// multiple volumes */ 312 ///}; 313 /// 314 ///const PNFS_BLOCK_MAX_SIG_COMP = 16; /* maximum components per 315 /// signature */ 316 ///struct pnfs_block_simple_volume_info4 { 317 /// pnfs_block_sig_component4 ds<PNFS_BLOCK_MAX_SIG_COMP>; 318 /// /* disk signature */ 319 ///}; 320 /// 321 /// 322 ///struct pnfs_block_slice_volume_info4 { 323 /// offset4 start; /* offset of the start of the 324 /// slice in bytes */ 325 /// length4 length; /* length of slice in bytes */ 326 /// uint32_t volume; /* array index of sliced 327 /// volume */ 328 ///}; 329 /// 330 ///struct pnfs_block_concat_volume_info4 { 331 /// uint32_t volumes<>; /* array indices of volumes 332 /// which are concatenated */ 333 ///}; 334 /// 335 ///struct pnfs_block_stripe_volume_info4 { 336 /// length4 stripe_unit; /* size of stripe in bytes */ 337 /// uint32_t volumes<>; /* array indices of volumes 338 /// which are striped across -- 339 /// MUST be same size */ 340 ///}; 341 /// 342 ///union pnfs_block_volume4 switch (pnfs_block_volume_type4 type) { 343 /// case PNFS_BLOCK_VOLUME_SIMPLE: 344 /// pnfs_block_simple_volume_info4 simple_info; 345 /// case PNFS_BLOCK_VOLUME_SLICE: 346 /// pnfs_block_slice_volume_info4 slice_info; 347 /// case PNFS_BLOCK_VOLUME_CONCAT: 348 /// pnfs_block_concat_volume_info4 concat_info; 349 /// case PNFS_BLOCK_VOLUME_STRIPE: 350 /// pnfs_block_stripe_volume_info4 stripe_info; 351 ///}; 352 /// 353 ////* block layout specific type for da_addr_body */ 354 ///struct pnfs_block_deviceaddr4 { 355 /// pnfs_block_volume4 volumes<>; /* array of volumes */ 356 ///}; 357 /// 359 The "pnfs_block_deviceaddr4" data structure allows arbitrarily 360 complex nested volume structures to be encoded. The types of 361 aggregations that are allowed are stripes, concatenations, and 362 slices. Note that the volume topology expressed in the 363 pnfs_block_deviceaddr4 data structure will always resolve to a 364 set of volumes of type PNFS_BLOCK_VOLUME_SIMPLE. The array of 365 volumes is ordered such that the root of the volume hierarchy is 366 the last element of the array. Concat, slice and stripe volumes MUST 367 refer to volumes defined by lower indexed elements of the array. 369 The "pnfs_block_deviceaddr4" data structure is returned by the 370 server as the storage-protocol-specific opaque field da_addr_body in 371 the "device_addr4" structure by a successful GETDEVICEINFO operation 372 [NFSV4.1]. 374 As noted above, all device_addr4 structures eventually resolve to a 375 set of volumes of type PNFS_BLOCK_VOLUME_SIMPLE. These volumes are 376 each uniquely identified by a set of signature components. 377 Complicated volume hierarchies may be composed of dozens of volumes 378 each with several signature components; thus, the device address may 379 require several kilobytes. The client SHOULD be prepared to allocate 380 a large buffer to contain the result. In the case of the server 381 returning NFS4ERR_TOOSMALL, the client SHOULD allocate a buffer of at 382 least gdir_mincount_bytes to contain the expected result and retry 383 the GETDEVICEINFO request.
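To make the topology resolution concrete, the following C sketch maps
a byte offset on the root volume (the last array element) down to a
PNFS_BLOCK_VOLUME_SIMPLE volume and an offset on it. The C structures
mirror the XDR above, and the per-volume size field is assumed to
have been precomputed by the client while decoding the device address
(from slice lengths and discovered LUN sizes), since the
concatenation case needs component sizes; all names are illustrative.

   #include <stdint.h>

   enum vol_type { VOL_SIMPLE = 0, VOL_SLICE = 1,
                   VOL_CONCAT = 2, VOL_STRIPE = 3 };

   struct volume {
       enum vol_type   type;
       uint64_t        size;         /* precomputed by the client */
       uint64_t        start;        /* SLICE: start within source */
       uint64_t        stripe_unit;  /* STRIPE: stripe unit in bytes */
       uint32_t        nvols;        /* CONCAT/STRIPE: component count */
       const uint32_t *vols;         /* CONCAT/STRIPE: component indices */
       uint32_t        source;       /* SLICE: index of sliced volume */
   };

   /* Resolve a byte offset on v[idx] to a PNFS_BLOCK_VOLUME_SIMPLE
    * volume; returns its array index (or -1 on a malformed topology)
    * and stores the offset on that volume in *out.  The recursion
    * terminates because components MUST refer to lower-indexed
    * volumes. */
   static int resolve(const struct volume *v, uint32_t idx,
                      uint64_t off, uint64_t *out)
   {
       switch (v[idx].type) {
       case VOL_SIMPLE:
           *out = off;
           return (int)idx;
       case VOL_SLICE:
           return resolve(v, v[idx].source, v[idx].start + off, out);
       case VOL_CONCAT:
           for (uint32_t i = 0; i < v[idx].nvols; i++) {
               uint64_t sz = v[v[idx].vols[i]].size;
               if (off < sz)
                   return resolve(v, v[idx].vols[i], off, out);
               off -= sz;          /* skip this component */
           }
           return -1;              /* offset beyond end of concat */
       case VOL_STRIPE: {
           uint64_t su = v[idx].stripe_unit;
           uint64_t stripe = off / su;    /* global stripe number */
           uint32_t comp = (uint32_t)(stripe % v[idx].nvols);
           return resolve(v, v[idx].vols[comp],
                          (stripe / v[idx].nvols) * su + (off % su),
                          out);
       }
       }
       return -1;
   }

For a device address with N volumes, the client would call
resolve(volumes, N - 1, logical_offset, &lun_offset), and then map
the returned simple-volume index to a local LUN as described in
Section 2.2.1.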
385 2.2.3. GETDEVICELIST and GETDEVICEINFO deviceid4 387 The server in response to a GETDEVICELIST request typically will 388 return a single "deviceid4" in the gdlr_deviceid_list array. This is 389 because the deviceid4 when passed to GETDEVICEINFO will return a 390 "device_addr4" which encodes the entire volume hierarchy. In the 391 case of copy-on-write file systems, the "gdlr_deviceid_list" array 392 may contain two deviceid4's, one referencing the read-only volume 393 hierarchy, and one referencing the writable volume hierarchy. There 394 is no required ordering of the readable and writable ids in the array 395 as the volumes are uniquely identified by their deviceid4, and are 396 referred to by layouts using the deviceid4. Another example of the 397 server returning multiple device items occurs when the file handle 398 represents the root of a name space spanning multiple physical file 399 systems on the server, each with a different volume hierarchy. In 400 this example a server implementation may return either a list of 401 deviceids used by each of the physical file systems, or it may return 402 an empty list. 404 Each deviceid4 returned by a successful GETDEVICELIST operation is a 405 shorthand id used to reference the whole volume topology. These 406 device ids, as well as device ids returned in extents of a LAYOUTGET 407 operation, can be used as input to the GETDEVICEINFO operation. 408 Decoding the "pnfs_block_deviceaddr4" results in a flat ordering of 409 data blocks mapped to PNFS_BLOCK_VOLUME_SIMPLE volumes. Combined 410 with the mapping to a client LUN described in 2.2.1 Volume 411 Identification, a logical volume offset can be mapped to a block on a 412 pNFS client LUN [NFSV4.1]. 414 2.3. Data Structures: Extents and Extent Lists 416 A pNFS block layout is a list of extents within a flat array of data 417 blocks in a logical volume. The details of the volume topology can 418 be determined by using the GETDEVICEINFO operation (see discussion of 419 volume identification, section 2.2 above). The block layout 420 describes the individual block extents on the volume that make up the 421 file. The offsets and lengths contained in an extent are specified in 422 units of bytes. 424 ///enum pnfs_block_extent_state4 { 425 /// PNFS_BLOCK_READ_WRITE_DATA = 0, /* the data located by this 426 /// extent is valid 427 /// for reading and writing. */ 428 /// PNFS_BLOCK_READ_DATA = 1, /* the data located by this 429 /// extent is valid for reading 430 /// only; it may not be 431 /// written. */ 432 /// PNFS_BLOCK_INVALID_DATA = 2, /* the location is valid; the 433 /// data is invalid. It is a 434 /// newly (pre-) allocated 435 /// extent. There is physical 436 /// space on the volume. */ 437 /// PNFS_BLOCK_NONE_DATA = 3 /* the location is invalid. It 438 /// is a hole in the file. 439 /// There is no physical space 440 /// on the volume. */ 441 ///}; 443 /// 444 ///struct pnfs_block_extent4 { 445 /// deviceid4 vol_id; /* id of logical volume on 446 /// which extent of file is 447 /// stored. */ 448 /// offset4 file_offset; /* the starting byte offset in 449 /// the file */ 450 /// length4 extent_length; /* the size in bytes of the 451 /// extent */ 452 /// offset4 storage_offset; /* the starting byte offset in 453 /// the volume */ 454 /// pnfs_block_extent_state4 es; /* the state of this extent */ 455 ///}; 456 /// 457 ////* block layout specific type for loc_body */ 458 ///struct pnfs_block_layout4 { 459 /// pnfs_block_extent4 extents<>; /* extents which make up this 460 /// layout.
*/ 461 ///}; 462 /// 464 The block layout consists of a list of extents which map the logical 465 regions of the file to physical locations on a volume. The "storage 466 offset" field within each extent identifies a location on the logical 467 volume specified by the "vol_id" field in the extent. The vol_id 468 itself is shorthand for the whole topology of the logical volume on 469 which the file is stored. The client is responsible for translating 470 this logical offset into an offset on the appropriate underlying SAN 471 logical unit. In most cases all extents in a layout will reside on 472 the same volume and thus have the same vol_id. In the case of 473 copy-on-write file systems, the PNFS_BLOCK_READ_DATA extents may have 474 a different vol_id from the writable extents. 476 Each extent maps a logical region of the file onto a portion of the 477 specified logical volume. The file_offset, extent_length, and es 478 fields for an extent returned from the server are valid for all 479 extents. In contrast, the interpretation of the storage_offset field 480 depends on the value of es as follows (in increasing order): 482 o PNFS_BLOCK_READ_WRITE_DATA means that storage_offset is valid, and 483 points to valid/initialized data that can be read and written. 485 o PNFS_BLOCK_READ_DATA means that storage_offset is valid and points 486 to valid/initialized data which can only be read. Write 487 operations are prohibited; the client may need to request a read- 488 write layout. 490 o PNFS_BLOCK_INVALID_DATA means that storage_offset is valid, but 491 points to invalid un-initialized data. This data must not be 492 physically read from the disk until it has been initialized. A 493 read request for a PNFS_BLOCK_INVALID_DATA extent must fill the 494 user buffer with zeros, unless the extent is covered by a 495 PNFS_BLOCK_READ_DATA extent of a copy-on-write file system. Write 496 requests must write whole server-sized blocks to the disk; bytes 497 not initialized by the user must be set to zero. Any write to 498 storage in a PNFS_BLOCK_INVALID_DATA extent changes the written 499 portion of the extent to PNFS_BLOCK_READ_WRITE_DATA; the pNFS 500 client is responsible for reporting this change via LAYOUTCOMMIT. 502 o PNFS_BLOCK_NONE_DATA means that storage_offset is not valid, and 503 this extent may not be used to satisfy write requests. Read 504 requests may be satisfied by zero-filling as for 505 PNFS_BLOCK_INVALID_DATA. PNFS_BLOCK_NONE_DATA extents may be 506 returned by requests for readable extents; they are never returned 507 if the request was for a writeable extent. 509 An extent list lists all relevant extents in increasing order of the 510 file_offset of each extent; any ties are broken by increasing order 511 of the extent state (es). 513 2.3.1. Layout Requests and Extent Lists 515 Each request for a layout specifies at least three parameters: file 516 offset, desired size, and minimum size. If the status of a request 517 indicates success, the extent list returned must meet the following 518 criteria: 520 o A request for a readable (but not writeable) layout returns only 521 PNFS_BLOCK_READ_DATA or PNFS_BLOCK_NONE_DATA extents (but not 522 PNFS_BLOCK_INVALID_DATA or PNFS_BLOCK_READ_WRITE_DATA extents). 524 o A request for a writeable layout returns 525 PNFS_BLOCK_READ_WRITE_DATA or PNFS_BLOCK_INVALID_DATA extents (but 526 not PNFS_BLOCK_NONE_DATA extents).
It may also return 527 PNFS_BLOCK_READ_DATA extents only when the offset ranges in those 528 extents are also covered by PNFS_BLOCK_INVALID_DATA extents to 529 permit writes. 531 o The first extent in the list MUST contain the requested starting 532 offset. 534 o The total size of extents within the requested range MUST cover at 535 least the minimum size. One exception is allowed: the total size 536 MAY be smaller if only readable extents were requested and EOF is 537 encountered. 539 o Extents in the extent list MUST be logically contiguous for a 540 read-only layout. For a read-write layout, the set of writable 541 extents (i.e., excluding PNFS_BLOCK_READ_DATA extents) MUST be 542 logically contiguous. Every PNFS_BLOCK_READ_DATA extent in a 543 read-write layout MUST be covered by one or more 544 PNFS_BLOCK_INVALID_DATA extents. This overlap of 545 PNFS_BLOCK_READ_DATA and PNFS_BLOCK_INVALID_DATA extents is the 546 only permitted extent overlap. 548 o Extents MUST be ordered in the list by starting offset, with 549 PNFS_BLOCK_READ_DATA extents preceding PNFS_BLOCK_INVALID_DATA 550 extents in the case of equal file_offsets. 552 2.3.2. Layout Commits 554 ////* block layout specific type for lou_body */ 555 ///struct pnfs_block_layoutupdate4 { 556 /// pnfs_block_extent4 commit_list<>; /* list of extents which 557 /// * now contain valid data. 558 /// */ 559 ///}; 560 /// 562 The "pnfs_block_layoutupdate4" structure is used by the client as the 563 block-protocol specific argument in a LAYOUTCOMMIT operation. The 564 "commit_list" field is an extent list covering regions of the file 565 layout that were previously in the PNFS_BLOCK_INVALID_DATA state, but 566 have been written by the client and should now be considered in the 567 PNFS_BLOCK_READ_WRITE_DATA state. The es field of each extent in the 568 commit_list MUST be set to PNFS_BLOCK_READ_WRITE_DATA. The extents 569 in the commit list MUST be disjoint and MUST be sorted by 570 file_offset. The storage_offset field is unused. Implementers 571 should be aware that a server may be unable to commit regions at a 572 granularity smaller than a file-system block (typically 4KB or 8KB). 573 As noted above, the block size that the server uses is available as 574 an NFSv4 attribute, and any extents included in the "commit_list" 575 MUST be aligned to this granularity and have a size that is a 576 multiple of this granularity. If the client believes that its 577 actions have moved the end-of-file into the middle of a block being 578 committed, the client MUST write zeroes from the end-of-file to the 579 end of that block before committing the block. Failure to do so may 580 result in junk (uninitialized data) appearing in that area if the 581 file is subsequently extended by moving the end-of-file.
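The alignment arithmetic above is simple but easy to get wrong. The
following C sketch rounds a written byte range to the server's block
size to produce the file_offset and extent_length of a commit_list
entry; the function name is illustrative, and the caller remains
responsible for zeroing from the end-of-file to the end of its block
when the end-of-file lands inside a block being committed.

   #include <stdint.h>

   /* Expand a written byte range (write_len > 0 assumed) to the
    * enclosing range of whole server-sized blocks, as required for
    * commit_list extents. */
   static void commit_extent_bounds(uint64_t write_off,
                                    uint64_t write_len,
                                    uint64_t blksize,
                                    uint64_t *file_offset,
                                    uint64_t *extent_length)
   {
       uint64_t first_blk = write_off / blksize;
       uint64_t last_blk  = (write_off + write_len - 1) / blksize;

       *file_offset   = first_blk * blksize;
       *extent_length = (last_blk - first_blk + 1) * blksize;
   }

For example, a 100-byte write at offset 6000 with a 4096-byte block
size yields a commit extent with file_offset 4096 and extent_length
4096, since the write falls entirely within the second block.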
583 2.3.3. Layout Returns 585 The LAYOUTRETURN operation is done without any block layout specific 586 data. When the LAYOUTRETURN operation specifies a 587 LAYOUTRETURN4_FILE return type, then the layoutreturn_file4 data 588 structure specifies the region of the file layout that is no longer 589 needed by the client. The opaque "lrf_body" field of the 590 "layoutreturn_file4" data structure MUST have length zero. A 591 LAYOUTRETURN operation represents an explicit release of resources by 592 the client, usually done for the purpose of avoiding unnecessary 593 CB_LAYOUTRECALL operations in the future. The client may return 594 disjoint regions of the file by using multiple LAYOUTRETURN 595 operations within a single COMPOUND operation. 597 Note that the block/volume layout supports unilateral layout 598 revocation. When a layout is unilaterally revoked by the server, 599 usually due to the client's lease time expiring, or a delegation 600 being recalled, or the client failing to return a layout in a timely 601 manner, it is important for the sake of correctness that any in- 602 flight I/Os that the client issued before the layout was revoked are 603 rejected at the storage. For the block/volume protocol, this is 604 possible by fencing a client with an expired layout timer from the 605 physical storage. Note, however, that the granularity of this 606 operation can only be at the host/logical-unit level. Thus, if one 607 of a client's layouts is unilaterally revoked by the server, it will 608 effectively render useless *all* of the client's layouts for files 609 located on the storage units comprising the logical volume. This may 610 render useless the client's layouts for files in other file systems. 612 2.3.4. Client Copy-on-Write Processing 614 Copy-on-write is a mechanism used to support file and/or file system 615 snapshots. When writing to unaligned regions, or to regions smaller 616 than a file system block, the writer must copy the portions of the 617 original file data to a new location on disk. This behavior can 618 either be implemented on the client or the server. The paragraphs 619 below describe how a pNFS block layout client implements access to a 620 file which requires copy-on-write semantics. 622 Distinguishing the PNFS_BLOCK_READ_WRITE_DATA and 623 PNFS_BLOCK_READ_DATA extent types in combination with the allowed 624 overlap of PNFS_BLOCK_READ_DATA extents with PNFS_BLOCK_INVALID_DATA 625 extents allows copy-on-write processing to be done by pNFS clients. 626 In classic NFS, this operation would be done by the server. Since 627 pNFS enables clients to do direct block access, it is useful for 628 clients to participate in copy-on-write operations. All block/volume 629 pNFS clients MUST support this copy-on-write processing. 631 When a client wishes to write data covered by a PNFS_BLOCK_READ_DATA 632 extent, it MUST have requested a writable layout from the server; 633 that layout will contain PNFS_BLOCK_INVALID_DATA extents to cover all 634 the data ranges of that layout's PNFS_BLOCK_READ_DATA extents. More 635 precisely, for any file_offset range covered by one or more 636 PNFS_BLOCK_READ_DATA extents in a writable layout, the server MUST 637 include one or more PNFS_BLOCK_INVALID_DATA extents in the layout 638 that cover the same file_offset range. When performing a write to 639 such an area of a layout, the client MUST effectively copy the data 640 from the PNFS_BLOCK_READ_DATA extent for any partial blocks of 641 file_offset and range, merge in the changes to be written, and write 642 the result to the PNFS_BLOCK_INVALID_DATA extent for the blocks for 643 that file_offset and range. That is, if entire blocks of data are to 644 be overwritten by an operation, the corresponding 645 PNFS_BLOCK_READ_DATA blocks need not be fetched, but any partial- 646 block writes must be merged with data fetched via 647 PNFS_BLOCK_READ_DATA extents before storing the result via 648 PNFS_BLOCK_INVALID_DATA extents. For the purposes of this 649 discussion, "entire blocks" and "partial blocks" refer to the 650 server's file-system block size.
Storing of data in a 651 PNFS_BLOCK_INVALID_DATA extent converts the written portion of the 652 PNFS_BLOCK_INVALID_DATA extent to a PNFS_BLOCK_READ_WRITE_DATA 653 extent; all subsequent reads MUST be performed from this extent; the 654 corresponding portion of the PNFS_BLOCK_READ_DATA extent MUST NOT be 655 used after storing data in a PNFS_BLOCK_INVALID_DATA extent. If a 656 client writes only a portion of an extent, the extent may be split at 657 block-aligned boundaries. 659 When a client wishes to write data to a PNFS_BLOCK_INVALID_DATA 660 extent that is not covered by a PNFS_BLOCK_READ_DATA extent, it MUST 661 treat this write identically to a write to a file not involved with 662 copy-on-write semantics. Thus, data must be written in at least 663 block size increments, aligned to multiples of block sized offsets, 664 and unwritten portions of blocks must be zero filled. 666 In the LAYOUTCOMMIT operation that normally sends updated layout 667 information back to the server, for writable data, some 668 PNFS_BLOCK_INVALID_DATA extents may be committed as 669 PNFS_BLOCK_READ_WRITE_DATA extents, signifying that the storage at 670 the corresponding storage_offset values has been stored into and is 671 now to be considered as valid data to be read. PNFS_BLOCK_READ_DATA 672 extents are not committed to the server. For extents that the client 673 receives via LAYOUTGET as PNFS_BLOCK_INVALID_DATA and returns via 674 LAYOUTCOMMIT as PNFS_BLOCK_READ_WRITE_DATA, the server will 675 understand that the PNFS_BLOCK_READ_DATA mapping for that extent is 676 no longer valid or necessary for that file.
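To summarize the read-merge-write sequence, here is a minimal C
sketch of the client write path for a range covered by both extent
types. The two I/O helpers stand in for direct SAN access through the
PNFS_BLOCK_READ_DATA and PNFS_BLOCK_INVALID_DATA extents, and a fixed
4 KB block size is assumed; both are illustrative, not part of the
protocol.

   #include <stdint.h>
   #include <string.h>

   #define BLKSIZE 4096   /* assumed server layout_blksize */

   /* Illustrative stand-ins for direct SAN I/O via the two extents. */
   int read_via_read_data_extent(uint64_t blk_off, uint8_t *buf);
   int write_via_invalid_data_extent(uint64_t blk_off,
                                     const uint8_t *buf);

   /* Write [off, off + len) with copy-on-write semantics: whole
    * blocks are written directly; partial blocks are first filled
    * with old data fetched through the PNFS_BLOCK_READ_DATA extent. */
   int cow_write(uint64_t off, const uint8_t *data, uint64_t len)
   {
       for (uint64_t b = (off / BLKSIZE) * BLKSIZE;
            b < off + len; b += BLKSIZE) {
           uint8_t block[BLKSIZE];
           uint64_t lo = (off > b) ? off : b;
           uint64_t hi = (off + len < b + BLKSIZE) ? off + len
                                                   : b + BLKSIZE;
           if (hi - lo < BLKSIZE &&
               read_via_read_data_extent(b, block) < 0)
               return -1;   /* partial block: merge with old data */
           memcpy(block + (lo - b), data + (lo - off), hi - lo);
           if (write_via_invalid_data_extent(b, block) < 0)
               return -1;
       }
       return 0;
   }

After such writes, the client reads the affected range only through
the now PNFS_BLOCK_READ_WRITE_DATA storage, and reports the
conversion via LAYOUTCOMMIT as described above.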
678 2.3.5. Extents are Permissions 680 Layout extents returned to pNFS clients grant permission to read or 681 write; PNFS_BLOCK_READ_DATA and PNFS_BLOCK_NONE_DATA are read-only 682 (PNFS_BLOCK_NONE_DATA reads as zeroes), PNFS_BLOCK_READ_WRITE_DATA 683 and PNFS_BLOCK_INVALID_DATA are read/write (PNFS_BLOCK_INVALID_DATA 684 reads as zeros; any write converts it to PNFS_BLOCK_READ_WRITE_DATA). 685 This is the only means by which a client obtains permission to perform 686 direct I/O to storage devices; a pNFS client MUST NOT perform direct 687 I/O operations that are not permitted by an extent held by the 688 client. Client adherence to this rule places the pNFS server in 689 control of potentially conflicting storage device operations, 690 enabling the server to determine what does conflict and how to avoid 691 conflicts by granting and recalling extents to/from clients. 693 Block/volume class storage devices are not required to perform read 694 and write operations atomically. Overlapping concurrent read and 695 write operations to the same data may cause the read to return a 696 mixture of before-write and after-write data. Overlapping write 697 operations can be worse, as the result could be a mixture of data 698 from the two write operations; data corruption can occur if the 699 underlying storage is striped and the operations complete in 700 different orders on different stripes. A pNFS server can avoid these 701 conflicts by implementing a single writer XOR multiple readers 702 concurrency control policy when there are multiple clients that wish 703 to access the same data. This policy SHOULD be implemented when 704 storage devices do not provide atomicity for concurrent read/write 705 and write/write operations to the same data. 707 If a client makes a layout request that conflicts with an existing 708 layout delegation, the request will be rejected with the error 709 NFS4ERR_LAYOUTTRYLATER. The client is then expected to retry the 710 request after a short interval. During this interval the server 711 SHOULD recall the conflicting portion of the layout delegation from 712 the client that currently holds it. This reject-and-retry approach 713 does not prevent client starvation when there is contention for the 714 layout of a particular file. For this reason a pNFS server SHOULD 715 implement a mechanism to prevent starvation. One possibility is that 716 the server can maintain a queue of rejected layout requests. Each 717 new layout request can be checked to see if it conflicts with a 718 previous rejected request, and if so, the newer request can be 719 rejected. Once the original requesting client retries its request, 720 its entry in the rejected request queue can be cleared, or the entry 721 in the rejected request queue can be removed when it reaches a 722 certain age. 724 NFSv4 supports mandatory locks and share reservations. These are 725 mechanisms that clients can use to restrict the set of I/O operations 726 that are permissible to other clients. Since all I/O operations 727 ultimately arrive at the NFSv4 server for processing, the server is 728 in a position to enforce these restrictions. However, with pNFS 729 layouts, I/Os will be issued from the clients that hold the layouts 730 directly to the storage devices that host the data. These devices 731 have no knowledge of files, mandatory locks, or share reservations, 732 and are not in a position to enforce such restrictions. For this 733 reason the NFSv4 server MUST NOT grant layouts that conflict with 734 mandatory locks or share reservations. Further, if a conflicting 735 mandatory lock request or a conflicting open request arrives at the 736 server, the server MUST recall the part of the layout in conflict 737 with the request before granting the request.
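As an illustration of the client-side enforcement this section
requires (a pNFS client MUST NOT perform direct I/O not permitted by
a held extent), the following C sketch checks whether an I/O is fully
covered by extents that permit it. The structures abbreviate the XDR
types, the extent list is assumed to be sorted by file_offset as
Section 2.3 requires, and all names are illustrative.

   #include <stdint.h>

   enum es { READ_WRITE_DATA, READ_DATA, INVALID_DATA, NONE_DATA };

   struct extent {
       uint64_t file_offset;
       uint64_t length;
       enum es  state;
   };

   /* Return 1 only if every byte of [off, off + len) lies in an
    * extent whose state permits the access; reads are permitted by
    * every state (INVALID_DATA and NONE_DATA read as zeros). */
   static int io_permitted(const struct extent *ex, unsigned n,
                           uint64_t off, uint64_t len, int is_write)
   {
       uint64_t pos = off;
       for (unsigned i = 0; i < n && pos < off + len; i++) {
           if (ex[i].file_offset > pos)
               return 0;              /* uncovered gap: no permission */
           if (ex[i].file_offset + ex[i].length <= pos)
               continue;              /* extent entirely before pos */
           if (is_write && ex[i].state != READ_WRITE_DATA &&
               ex[i].state != INVALID_DATA)
               continue;  /* an overlapping READ_DATA grants no write */
           pos = ex[i].file_offset + ex[i].length;
       }
       return pos >= off + len;
   }

Note that the sort order of Section 2.3.1 (PNFS_BLOCK_READ_DATA
before PNFS_BLOCK_INVALID_DATA at equal file_offsets) lets this scan
skip a read-only extent and still find the overlapping writable one.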
739 2.3.6. End-of-file Processing 741 The end-of-file location can be changed in two ways: implicitly as 742 the result of a WRITE or LAYOUTCOMMIT beyond the current end-of-file, 743 or explicitly as the result of a SETATTR request. Typically, when a 744 file is truncated by an NFSv4 client via the SETATTR call, the server 745 frees any disk blocks belonging to the file which are beyond the new 746 end-of-file byte, and MUST write zeros to the portion of the new end- 747 of-file block beyond the new end-of-file byte. These actions render 748 any pNFS layouts which refer to the blocks that are freed or written 749 semantically invalid. Therefore, the server MUST recall from clients 750 the portions of any pNFS layouts which refer to blocks that will be 751 freed or written by the server before processing the truncate 752 request. These recalls may take time to complete; as explained in 753 [NFSv4.1], if the server cannot respond to the client SETATTR request 754 in a reasonable amount of time, it SHOULD reply to the client with 755 the error NFS4ERR_DELAY. 757 Blocks in the PNFS_BLOCK_INVALID_DATA state which lie beyond the new 758 end-of-file block present a special case. The server has reserved 759 these blocks for use by a pNFS client with a writable layout for the 760 file, but the client has yet to commit the blocks, and they are not 761 yet a part of the file mapping on disk. The server MAY free these 762 blocks while processing the SETATTR request. If so, the server MUST 763 recall any layouts from pNFS clients which refer to the blocks before 764 processing the truncate. If the server does not free the 765 PNFS_BLOCK_INVALID_DATA blocks while processing the SETATTR request, 766 it need not recall layouts which refer only to the 767 PNFS_BLOCK_INVALID_DATA blocks. 769 When a file is extended implicitly by a WRITE or LAYOUTCOMMIT beyond 770 the current end-of-file, or extended explicitly by a SETATTR request, 771 the server need not recall any portions of any pNFS layouts.
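The truncate arithmetic can be made concrete with a short C sketch of
what the server must zero and what it may free when processing the
SETATTR; "blksize" is the server's file-system block size, and the
helper name is illustrative.

   #include <stdint.h>

   /* For a truncate to new_eof: the tail of the block containing the
    * new end-of-file byte MUST be zeroed, and blocks from
    * first_freed_block onward may be freed, so layouts covering
    * either range must be recalled first. */
   static void truncate_ranges(uint64_t new_eof, uint64_t blksize,
                               uint64_t *zero_len,
                               uint64_t *first_freed_block)
   {
       uint64_t within = new_eof % blksize; /* bytes kept in EOF block */

       *zero_len = within ? blksize - within : 0; /* zero from new_eof */
       *first_freed_block = (new_eof + blksize - 1) / blksize;
   }

For example, with new_eof 10000 and a 4096-byte block, 2288 bytes
(offsets 10000 through 12287) are zeroed and blocks from block number
3 (byte offset 12288) onward may be freed.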
773 2.3.7. Layout Hints 775 The SETATTR operation supports a layout hint attribute [NFSv4.1]. 776 When the client sets a layout hint (data type layouthint4) with a 777 layout type of LAYOUT4_BLOCK_VOLUME (the loh_type field), the 778 loh_body field contains a value of data type pnfs_block_layouthint4. 780 ////* block layout specific type for loh_body */ 781 ///struct pnfs_block_layouthint4 { 782 /// uint64_t maximum_io_time; /* maximum i/o time in seconds 783 /// */ 784 ///}; 785 /// 787 The block layout client uses the layout hint data structure to 788 communicate to the server the maximum time that it may take an I/O to 789 execute on the client. Clients using block layouts MUST set the 790 layout hint attribute before using LAYOUTGET operations. 792 2.3.8. Client Fencing 794 The pNFS block protocol must handle situations in which a system 795 failure, typically a network connectivity issue, requires the server 796 to unilaterally revoke extents from one client in order to transfer 797 the extents to another client. The pNFS server implementation MUST 798 ensure that when resources are transferred to another client, they 799 are not used by the client originally owning them, and this must be 800 ensured against any possible combination of partitions and delays 801 among all of the participants to the protocol (server, storage and 802 client). Two approaches to guaranteeing this isolation are possible 803 and are discussed below. 805 One implementation choice for fencing the block client from the block 806 storage is the use of LUN (Logical Unit Number) masking or mapping at 807 the storage systems or storage area network to disable access by the 808 client to be isolated. This requires server access to a management 809 interface for the storage system and authorization to perform LUN 810 masking and management operations. For example, SMI-S [SMIS] 811 provides a means to discover and mask LUNs, including a means of 812 associating clients with the necessary World Wide Names or Initiator 813 names to be masked. 815 In the absence of support for LUN masking, the server has to rely on 816 the clients to implement a timed lease I/O fencing mechanism. 817 Because clients do not know if the server is using LUN masking, in 818 all cases the client MUST implement timed lease fencing. In timed 819 lease fencing we define two time periods: the first, "lease_time", is 820 the length of a lease as defined by the server's lease_time attribute 821 (see [NFSV4.1]), and the second, "maximum_io_time", is the maximum 822 time it can take for a client I/O to the storage system to either 823 complete or fail; this value is often 30 seconds or 60 seconds, but 824 may be longer in some environments. If the maximum client I/O time 825 cannot be bounded, the client MUST use a value of all 1s as the 826 maximum_io_time. 828 The client MUST use SETATTR with a layout hint of type 829 LAYOUT4_BLOCK_VOLUME to inform the server of its maximum I/O time 830 prior to issuing the first LAYOUTGET operation. The maximum I/O time 831 hint is a per-client attribute, and as such the server SHOULD 832 maintain the value set by each client. A server which implements 833 fencing via LUN masking SHOULD accept any maximum I/O time value from 834 a client. A server which does not implement fencing may return an 835 error NFS4ERR_INVAL to the SETATTR operation. Such a server SHOULD 836 return NFS4ERR_INVAL when a client sends an unbounded maximum I/O 837 time (all 1s), or when the maximum I/O time is significantly greater 838 than that of other clients using block layouts with pNFS. 840 When a client receives the error NFS4ERR_INVAL in response to the 841 SETATTR operation for a layout hint, the client MUST NOT use the 842 LAYOUTGET operation. After responding with NFS4ERR_INVAL to the 843 SETATTR for layout hint, the server MUST return the error 844 NFS4ERR_LAYOUTUNAVAILABLE to all subsequent LAYOUTGET operations from 845 that client. Thus the server, by returning either NFS4ERR_INVAL or 846 NFS4_OK, determines whether or not a client with a large or an 847 unbounded maximum I/O time may use pNFS. 849 Using the lease time and the maximum I/O time values, we specify the 850 behavior of the client and server as follows. 852 When a client receives layout information via a LAYOUTGET operation, 853 those layouts are valid for at most "lease_time" seconds from when 854 the server granted them. A layout is renewed by any successful 855 SEQUENCE operation, or whenever a new stateid is created or updated 856 (see the section "Lease Renewal" of [NFSV4.1]). If the layout lease 857 is not renewed prior to expiration, the client MUST cease to use the 858 layout after "lease_time" seconds from when it either sent the 859 original LAYOUTGET command, or sent the last operation renewing the 860 lease. In other words, the client may not issue any I/O to blocks 861 specified by an expired layout. In the presence of large 862 communication delays between the client and server it is even 863 possible for the lease to expire prior to the server response 864 arriving at the client. In such a situation the client MUST NOT use 865 the expired layouts, and SHOULD revert to using standard NFSv4.1 READ 866 and WRITE operations. Furthermore, the client must be configured 867 such that I/O operations complete within the "maximum_io_time" even 868 in the presence of multipath drivers that will retry I/Os via 869 multiple paths. 871 As stated in the section "Dealing with Lease Expiration on the 872 Client" of [NFSV4.1], if any SEQUENCE operation is successful, but 873 sr_status_flag has SEQ4_STATUS_EXPIRED_ALL_STATE_REVOKED, 874 SEQ4_STATUS_EXPIRED_SOME_STATE_REVOKED, or 875 SEQ4_STATUS_ADMIN_STATE_REVOKED set, the client MUST immediately 876 cease to use all layouts and device id to device address mappings 877 associated with the corresponding server. 879 In the absence of known two-way communication between the client and 880 the server on the fore channel, the server must wait for at least the 881 time period "lease_time" plus "maximum_io_time" before transferring 882 layouts from the original client to any other client. The server, 883 like the client, must take a conservative approach, and start the 884 lease expiration timer from the time that it received the operation 885 which last renewed the lease.
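The timing rules of this section reduce to two conservative checks,
sketched below in C. Each side measures from the point that is safe
for it: the client from when it sent the renewing operation, the
server from when it received that operation. The function names are
illustrative.

   #include <stdint.h>
   #include <time.h>

   /* Client side: an unrenewed layout is usable only within
    * lease_time seconds of when the renewing operation was *sent*. */
   static int layout_usable(time_t renewal_sent, uint32_t lease_time)
   {
       return time(NULL) < renewal_sent + (time_t)lease_time;
   }

   /* Server side: before transferring a revoked client's extents to
    * another client, wait out the lease plus the client's declared
    * maximum_io_time, so any in-flight I/O has completed or failed. */
   static time_t earliest_layout_transfer(time_t renewal_received,
                                          uint32_t lease_time,
                                          uint32_t maximum_io_time)
   {
       return renewal_received + (time_t)lease_time
                               + (time_t)maximum_io_time;
   }

A server that fences via LUN masking need not rely on the second
computation, which is why such a server can accept an unbounded
maximum I/O time hint.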
887 2.4. Crash Recovery Issues 889 When the server crashes while the client holds a writable layout, and 890 the client has written data to blocks covered by the layout, and the 891 blocks are still in the PNFS_BLOCK_INVALID_DATA state, the client has 892 two options for recovery. If the data that has been written to these 893 blocks is still cached by the client, the client can simply re-write 894 the data via NFSv4, once the server has come back online. However, 895 if the data is no longer in the client's cache, the client MUST NOT 896 attempt to source the data from the data servers. Instead, it should 897 attempt to commit the blocks in question to the server during the 898 server's recovery grace period, by sending a LAYOUTCOMMIT with the 899 "loca_reclaim" flag set to true. This process is described in detail 900 in [NFSv4.1] section 18.42.4. 902 2.5. Recalling resources: CB_RECALL_ANY 904 The server may decide that it cannot hold all of the state for 905 layouts without running out of resources. In such a case, it is free 906 to recall individual layouts using CB_LAYOUTRECALL to reduce the 907 load, or it may choose to request that the client return any layout. 909 For the block layout, we define the following bit: 911 ///const RCA4_BLK_LAYOUT_RECALL_ANY_LAYOUTS = 4; 913 When the server sends a CB_RECALL_ANY request to a client specifying 914 the RCA4_BLK_LAYOUT_RECALL_ANY_LAYOUTS bit in craa_type_mask, the 915 client should immediately respond with NFS4_OK, and then 916 asynchronously return complete file layouts until the number of files 917 with layouts cached on the client is less than craa_object_to_keep. 919 The block layout does not currently use bits 5, 6 or 7. If any of 920 these bits are set, the client should return NFS4ERR_INVAL. 922 2.6. Transient and Permanent Errors 924 The server may respond to LAYOUTGET with a variety of error statuses. 925 These errors can convey transient conditions or more permanent 926 conditions that are unlikely to be resolved soon. 928 The transient errors, NFS4ERR_RECALLCONFLICT and NFS4ERR_TRYLATER, are 929 used to indicate that the server cannot immediately grant the layout 930 to the client. In the former case this is because the server has 931 recently issued a CB_LAYOUTRECALL to the requesting client, whereas 932 in the case of NFS4ERR_TRYLATER, the server cannot grant the request 933 possibly due to sharing conflicts with other clients. In either 934 case, a reasonable approach for the client is to wait several 935 milliseconds and retry the request. The client SHOULD track the 936 number of retries, and if forward progress is not made, the client 937 SHOULD send the READ or WRITE operation directly to the server. 939 The error NFS4ERR_LAYOUTUNAVAILABLE may be returned by the server if 940 layouts are not supported for the requested file or its containing 941 file system. The server may also return this error code if the 942 server is in the process of migrating the file from secondary storage, 943 or for any other reason which causes the server to be unable to 944 supply the layout. As a result of receiving 945 NFS4ERR_LAYOUTUNAVAILABLE, the client SHOULD send future READ and 946 WRITE requests directly to the server. It is expected that a client 947 will not cache the file's layoutunavailable state forever, particularly 948 if the file is closed, and thus eventually, the client MAY reissue a 949 LAYOUTGET operation.
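The retry guidance above maps onto a small client-side decision loop,
sketched here in C. The status codes are a local abstraction rather
than on-the-wire error values, and send_layoutget(), wait_ms(), and
the retry limit are illustrative assumptions.

   /* Local status abstraction; not the on-the-wire error values. */
   enum lg_status { LG_OK, LG_TRYLATER, LG_RECALLCONFLICT,
                    LG_LAYOUTUNAVAILABLE, LG_OTHER };

   enum io_path { USE_PNFS, USE_SERVER_RW };

   enum lg_status send_layoutget(void *file);   /* illustrative */
   void wait_ms(unsigned ms);                   /* illustrative */

   #define MAX_LAYOUT_RETRIES 3

   enum io_path choose_io_path(void *file)
   {
       for (int i = 0; i < MAX_LAYOUT_RETRIES; i++) {
           switch (send_layoutget(file)) {
           case LG_OK:
               return USE_PNFS;
           case LG_TRYLATER:         /* transient: brief delay, retry */
           case LG_RECALLCONFLICT:
               wait_ms(10);
               break;
           default:                  /* permanent or unexpected error */
               return USE_SERVER_RW;
           }
       }
       return USE_SERVER_RW;         /* no forward progress: fall back */
   }

Per the paragraph above, a client that fell back after
NFS4ERR_LAYOUTUNAVAILABLE need not stay in that state forever and MAY
eventually reissue LAYOUTGET, for example after the file is reopened.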
951 3. Security Considerations 953 Typically, SAN disk arrays and SAN protocols provide access control 954 mechanisms (access logics, LUN masking, etc.) which operate at the 955 granularity of individual hosts. The functionality provided by such 956 mechanisms makes it possible for the server to "fence" individual 957 client machines from certain physical disks---that is to say, to 958 prevent individual client machines from reading or writing to certain 959 physical disks. Finer-grained access control methods are not 960 generally available. For this reason, certain security 961 responsibilities are delegated to pNFS clients for block/volume 962 layouts. Block/volume storage systems generally control access at a 963 volume granularity, and hence pNFS clients have to be trusted to only 964 perform accesses allowed by the layout extents they currently hold 965 (e.g., and not access storage for files on which a layout extent is 966 not held). In general, the server will not be able to prevent a 967 client which holds a layout for a file from accessing parts of the 968 physical disk not covered by the layout. Similarly, the server will 969 not be able to prevent a client from accessing blocks covered by a 970 layout that it has already returned. This block-based level of 971 protection must be provided by the client software. 973 An alternative method of block/volume protocol use is for the storage 974 devices to export virtualized block addresses, which do reflect the 975 files to which blocks belong. These virtual block addresses are 976 exported to pNFS clients via layouts. This allows the storage device 977 to make appropriate access checks, while mapping virtual block 978 addresses to physical block addresses. In environments where the 979 security requirements are such that client-side protection from 980 access to storage outside of the layout is not sufficient, pNFS 981 block/volume storage layouts SHOULD NOT be used, unless the 982 storage device is able to implement the appropriate access checks, 983 via use of virtualized block addresses, or other means. 985 This also has implications for some NFSv4 functionality outside pNFS. 986 For instance, if a file is covered by a mandatory read-only lock, the 987 server can ensure that only readable layouts for the file are granted 988 to pNFS clients. However, it is up to each pNFS client to ensure 989 that the readable layout is used only to service read requests, and 990 not to allow writes to the existing parts of the file. Since 991 block/volume storage systems are generally not capable of enforcing 992 such file-based security, in environments where pNFS clients cannot 993 be trusted to enforce such policies, pNFS block/volume storage 994 layouts SHOULD NOT be used. 996 Access to block/volume storage is logically at a lower layer of the 997 I/O stack than NFSv4, and hence NFSv4 security is not directly 998 applicable to protocols that access such storage directly. Depending 999 on the protocol, some of the security mechanisms provided by NFSv4 1000 (e.g., encryption, cryptographic integrity) may not be available, or 1001 may be provided via different means. At one extreme, pNFS with 1002 block/volume storage can be used with storage access protocols (e.g., 1003 parallel SCSI) that provide essentially no security functionality. 1004 At the other extreme, pNFS may be used with storage protocols such as 1005 iSCSI that provide significant security functionality.
It is the 1006 responsibility of those administering and deploying pNFS with a 1007 block/volume storage access protocol to ensure that appropriate 1008 protection is provided to that protocol (physical security is a 1009 common means for protocols not based on IP). In environments where 1010 the security requirements for the storage protocol cannot be met, 1011 pNFS block/volume storage layouts SHOULD NOT be used. 1013 When security is available for a storage protocol, it is generally at 1014 a different granularity and with a different notion of identity than 1015 NFSv4 (e.g., NFSv4 controls user access to files, iSCSI controls 1016 initiator access to volumes). The responsibility for enforcing 1017 appropriate correspondences between these security layers is placed 1018 upon the pNFS client. As with the issues in the first paragraph of 1019 this section, in environments where the security requirements are 1020 such that client-side protection from access to storage outside of 1021 the layout is not sufficient, pNFS block/volume storage layouts 1022 SHOULD NOT be used. 1024 4. Conclusions 1026 This draft specifies the block/volume layout type for pNFS and 1027 associated functionality. 1029 5. IANA Considerations 1031 There are no IANA considerations in this document. All pNFS IANA 1032 Considerations are covered in [NFSV4.1]. 1034 6. Acknowledgments 1036 This draft draws extensively on the authors' familiarity with the 1037 mapping functionality and protocol in EMC's MPFS (previously named 1038 HighRoad) system [MPFS]. The protocol used by MPFS is called FMP 1039 (File Mapping Protocol); it is an add-on protocol that runs in 1040 parallel with file system protocols such as NFSv3 to provide pNFS- 1041 like functionality for block/volume storage. While drawing on FMP, 1042 the data structures and functional considerations in this draft 1043 differ in significant ways, based on lessons learned and the 1044 opportunity to take advantage of NFSv4 features such as COMPOUND 1045 operations. The design to support pNFS client participation in copy- 1046 on-write is based on text and ideas contributed by Craig Everhart 1047 (formerly with IBM). 1049 Andy Adamson, Richard Chandler, Benny Halevy, Fredric Isaman, and 1050 Mario Wurzl all helped to review drafts of this specification. 1052 7. References 1054 7.1. Normative References 1056 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1057 Requirement Levels", BCP 14, RFC 2119, March 1997. 1059 [NFSV4.1] Shepler, S., Eisler, M., and Noveck, D. ed., "NFSv4 Minor 1060 Version 1", draft-ietf-nfsv4-minorversion1-14.txt, Internet 1061 Draft, July 2007. 1063 [XDR] Eisler, M., "XDR: External Data Representation Standard", 1064 STD 67, RFC 4506, May 2006. 1066 7.2. Informative References 1068 [MPFS] EMC Corporation, "EMC Celerra Multi-Path File System", EMC 1069 Data Sheet, available at: 1070 http://www.emc.com/collateral/software/data-sheet/h2006-celerra-mpfs- 1071 mpfsi.pdf 1072 link checked 13 March 2008 1074 [SMIS] SNIA, "SNIA Storage Management Initiative Specification", 1075 version 1.0.2, available at: 1076 http://www.snia.org/tech_activities/standards/curr_standards/smi/SMI- 1077 S_Technical_Position_v1.0.3r1.pdf 1078 link checked 13 March 2008 1080 Authors' Addresses 1082 David L. 
Black 1083 EMC Corporation 1084 176 South Street 1085 Hopkinton, MA 01748 1087 Phone: +1 (508) 293-7953 1088 Email: black_david@emc.com 1089 Stephen Fridella 1090 EMC Corporation 1091 228 South Street 1092 Hopkinton, MA 01748 1094 Phone: +1 (508) 249-3528 1095 Email: fridella_stephen@emc.com 1097 Jason Glasgow 1098 EMC Corporation 1099 32 Coslin Drive 1100 Southboro, MA 01772 1102 Phone: +1 (508) 305 8831 1103 Email: glasgow_jason@emc.com 1105 Intellectual Property Statement 1107 The IETF takes no position regarding the validity or scope of any 1108 Intellectual Property Rights or other rights that might be claimed to 1109 pertain to the implementation or use of the technology described in 1110 this document or the extent to which any license under such rights 1111 might or might not be available; nor does it represent that it has 1112 made any independent effort to identify any such rights. Information 1113 on the procedures with respect to rights in RFC documents can be 1114 found in BCP 78 and BCP 79. 1116 Copies of IPR disclosures made to the IETF Secretariat and any 1117 assurances of licenses to be made available, or the result of an 1118 attempt made to obtain a general license or permission for the use of 1119 such proprietary rights by implementers or users of this 1120 specification can be obtained from the IETF on-line IPR repository at 1121 http://www.ietf.org/ipr. 1123 The IETF invites any interested party to bring to its attention any 1124 copyrights, patents or patent applications, or other proprietary 1125 rights that may cover technology that may be required to implement 1126 this standard. Please address the information to the IETF at ietf- 1127 ipr@ietf.org. 1129 Disclaimer of Validity 1131 This document and the information contained herein are provided on an 1132 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 1133 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 1134 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 1135 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 1136 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 1137 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 1139 Copyright Statement 1141 Copyright (C) The IETF Trust (2008). 1143 This document is subject to the rights, licenses and restrictions 1144 contained in BCP 78, and except as set forth therein, the authors 1145 retain all their rights. 1147 Acknowledgment 1149 Funding for the RFC Editor function is currently provided by the 1150 Internet Society.