INTERNET-DRAFT                                             David Noveck
Expires: April 2006                             Network Appliance, Inc.
                                                      Rodney C. Burnett
                                                              IBM, Inc.
                              October 2005

              Next Steps for NFSv4 Migration/Replication
                    draft-noveck-nfsv4-migrep-00.txt

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.  The list of
   Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Copyright Notice

   Copyright (C) The Internet Society (2005).  All Rights Reserved.

Abstract

   The fs_locations attribute in NFSv4 provides support for filesystem
   migration, replication, and referral.  Given the current work on
   supporting these features, and new needs such as support for a
   global namespace, it is time to look at this area and see what
   further development of this part of the protocol may be required.
   This document makes suggestions for the further development of
   these features in NFSv4.1 and also presents ideas for work that
   might be done as part of future minor versions.

Table Of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . .  3
   1.1.  History  . . . . . . . . . . . . . . . . . . . . . . . .  3
   1.2.  Areas to be Addressed  . . . . . . . . . . . . . . . . .  3
   2.  Clarifications/Corrections to V4.0 Functionality . . . . .  4
   2.1.  Attributes Returned by GETATTR and READDIR . . . . . . .  4
   2.1.1.  fsid . . . . . . . . . . . . . . . . . . . . . . . . .  5
   2.1.2.  mounted_on_fileid  . . . . . . . . . . . . . . . . . .  5
   2.1.3.  fileid . . . . . . . . . . . . . . . . . . . . . . . .  6
   2.1.4.  filehandle . . . . . . . . . . . . . . . . . . . . . .  6
   2.2.  Issues with the Error NFS4ERR_MOVED  . . . . . . . . . .  6
   2.2.1.  Issue of when to check current filehandle  . . . . . .  7
   2.2.2.  Issue of GETFH . . . . . . . . . . . . . . . . . . . .  7
   2.2.3.  Handling of PUTFH  . . . . . . . . . . . . . . . . . .  7
   2.2.4.  Inconsistent handling of GETATTR . . . . . . . . . . .  8
   2.2.5.  Ops not allowed to return NFS4ERR_MOVED  . . . . . . .  8
   2.2.6.  Summary of NFS4ERR_MOVED . . . . . . . . . . . . . . .  9
   2.3.  Issues of Incomplete Attribute Sets  . . . . . . . . . .  9
   2.3.1.  Handling of attributes for READDIR . . . . . . . . . . 10
   2.4.  Referral Issues  . . . . . . . . . . . . . . . . . . . . 11
   2.4.1.  Editorial Changes Related to Referrals . . . . . . . . 12
   3.  Feature Extensions . . . . . . . . . . . . . . . . . . . . 13
   3.1.  Attribute Continuity . . . . . . . . . . . . . . . . . . 13
   3.1.1.  filehandle . . . . . . . . . . . . . . . . . . . . . . 14
   3.1.2.  fileid . . . . . . . . . . . . . . . . . . . . . . . . 14
   3.1.3.  change attribute . . . . . . . . . . . . . . . . . . . 15
   3.1.4.  fsid . . . . . . . . . . . . . . . . . . . . . . . . . 15
   3.2.  Additional Attributes  . . . . . . . . . . . . . . . . . 16
   3.2.1.  fs_absent  . . . . . . . . . . . . . . . . . . . . . . 16
   3.2.2.  fs_location_info . . . . . . . . . . . . . . . . . . . 16
   3.2.3.  fh_replacement . . . . . . . . . . . . . . . . . . . . 28
   3.2.4.  fs_status  . . . . . . . . . . . . . . . . . . . . . . 31
   4.  Migration Protocol . . . . . . . . . . . . . . . . . . . . 33
   4.1.  NFSv4.x as a Migration Protocol  . . . . . . . . . . . . 34
   Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 36
   Normative References . . . . . . . . . . . . . . . . . . . . . 36
   Informative References . . . . . . . . . . . . . . . . . . . . 37
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . 37
   Full Copyright Statement . . . . . . . . . . . . . . . . . . . 37

1.  Introduction

1.1.  History

   When the fs_locations attribute was introduced, it was done with
   the expectation that a server-to-server migration protocol was in
   the offing.  Including the fs_locations-related features provided
   the client-side support that would allow clients to take advantage
   of migration once that protocol was developed, and could, until
   that point, support vendor-specific homogeneous server migration.

   As things happened, development of a server-to-server migration
   protocol stalled.  In part, this was due to the demands of NFSv4
   implementation itself.  Also, until V4 clients which supported
   these features were widely deployed, it was hard to justify the
   long-term effort for a new server-to-server protocol.

   Now that serious implementation work has begun, a number of issues
   have been discovered with the treatment of these features in
   RFC3530.  There are no significant protocol bugs, but there are
   numerous cases in which the text is unclear or contradictory on
   significant points.  Also, a number of suggestions have been made
   regarding small things left undone in the original specification,
   leading to the question of whether it is now an appropriate time
   to rectify those inadequacies.

   Another important development has been the idea of referrals.
   Referrals, a limiting case of migration, were not recognized when
   the spec was written, even though the protocol defined therein
   does support them.  See [referrals] for an explanation of
   referrals implementation.  Also, it has turned out that referrals
   are an important building block for the development of a global
   namespace for NFSv4.

1.2.  Areas to be Addressed

   This document is motivated in large part by the opportunity
   represented by NFSv4.1.  First, this will provide a way to revise
   the treatment of these features in the spec: to make it clearer,
   to avoid ambiguities and contradictions, and to incorporate
   explicit discussion of referrals into the text.

   NFSv4.1 also affords the opportunity to provide small extensions
   to these facilities, to make them more generally useful, in
   particular in environments in which migration between servers of
   different types is to be performed.  Use of these features in a
   global-namespace environment will also motivate certain
   extensions.

   The remaining issue in this area is the development of a vendor-
   independent migration mechanism.  This is definitely not something
   that can be done immediately (in v4.1), but the working group
   needs to figure out when this effort can be revived.  This
   document will examine a somewhat lower-overhead alternative to
   development of a separate server-to-server migration protocol.

   The alternative that will be explored is the use of NFSv4 itself,
   with a small set of additions, by a server operating as an NFSv4
   client to either pull or push file system state from or to another
   server.  This sort of incremental development can provide a more
   efficient way of getting a migration mechanism than development of
   a new protocol that would inevitably duplicate a lot of NFSv4.
   Since NFSv4 must already have the general ability to represent fs
   state that is accessible via NFSv4, using the core protocol as the
   base, and adding only the extensions needed to transfer data
   efficiently and to transfer locking state, should be more
   efficient in terms of design time.  The needed extensions could be
   introduced within a minor version.  It is not proposed or expected
   that these extensions would be in NFSv4.1.

2.  Clarifications/Corrections to V4.0 Functionality

   All of the sub-sections below deal with the basic functionality
   described, explicitly or implicitly, in RFC3530.  While the
   majority of the material is simply corrections, clarifications,
   and the resolution of ambiguities, in some cases there is cleanup
   to make things more consistent in v4.1, without adding any new
   functionality.  Functional changes are addressed in separate
   sections.

2.1.  Attributes Returned by GETATTR and READDIR

   While RFC3530 allows the server to return attributes in addition
   to fs_locations when GETATTR is used with a current filehandle
   within an absent filesystem, not much guidance is given to help
   clarify what is appropriate.  Such vagueness can result in serious
   interoperability issues.

   Instead of simply allowing an undefined set of attributes to be
   returned, the NFSv4.1 spec should clearly define the circumstances
   under which attributes for absent filesystems are to be returned.
   While some leeway may be necessary to accommodate different
   NFSv4.1 servers, unnecessary leeway should be avoided.

   In particular, there are a number of attributes which most server
   implementations should find relatively easy to supply and which
   are of critical importance to clients, particularly in those cases
   in which NFS4ERR_MOVED is returned when first crossing into an
   absent file system that the client has not previously referenced,
   i.e. a referral.

   NFSv4.1 should require servers to return fsid for an absent file
   system as well as fs_locations.  In order for the client to
   properly determine the boundaries of absent filesystems, it needs
   access to fsid.  In addition, when at the root of an absent
   filesystem, mounted_on_fileid needs to be returned.

   On the other hand, a number of attributes pose difficulties when
   returned for an absent filesystem.
   While not prohibiting the server from returning these, the NFSv4.1
   spec should explain the issues which may result in problems, since
   these are not always obvious.  Handling of some specific
   attributes is discussed below.

2.1.1.  fsid

   The fsid attribute allows clients to recognize when fs boundaries
   have been crossed.  This applies also when one crosses into an
   absent filesystem.  While it might seem that returning fsid is not
   absolutely required, since fs boundaries are also reflected, in
   this case, by means of the fs_root field of the fs_locations
   attribute, there are renaming issues that make this unreliable.
   Returning fsid is necessary for clients, and servers should have
   no difficulty in providing it.

   To avoid misunderstanding, the NFSv4.1 spec should note that the
   fsid provided in this case exists solely so that the fs boundaries
   can be properly noted, and that the fsid returned will not
   necessarily be valid after resolution of the migration event.  The
   logic of fsid handling for NFSv4 is that fsid's are only unique
   within a per-server context.  This would seem to be a strong
   indication that they need not be persistent when file systems are
   moved from server to server, although RFC 3530 does not
   specifically address the matter.

2.1.2.  mounted_on_fileid

   The mounted_on_fileid attribute is of particular importance to
   many clients, in that they need this information to form a proper
   response to a readdir() call.  When a readdir() call is done
   within UNIX, the d_ino field of each of the entries needs to have
   a unique value, normally derived from the NFSv4 fileid attribute.
   It is in the case in which a file system boundary is crossed that
   using the fileid attribute for this purpose, particularly when
   crossing into an absent fs, will pose problems.
   Note first that the fileid attribute, since it is within a new fs
   and thus a new fileid space, will not be unique within the
   directory.  Also, since the fs, at its new location, may arrange
   things differently, the fileid decided on at the directing server
   may be overridden at the target server, making it of little value.
   Neither of these problems arises in the case of mounted_on_fileid,
   since that fileid is in the context of the mounted-on fs and
   unique within it.

2.1.3.  fileid

   For reasons explained above under mounted_on_fileid, it would be
   difficult for the referring server to provide a fileid value that
   is of any use to the client.  Given this, it seems much better for
   the server never to return fileid values for files on an absent
   fs.

2.1.4.  filehandle

   Returning filehandles for files in the absent fs, whether by use
   of GETFH (discussed below) or by using the filehandle attribute
   with GETATTR or READDIR, poses problems for the client, as the
   server to which it is referred is likely not to assign the same
   filehandle value to the object in question.  Even though it is
   possible that volatile filehandles may allow a change, the
   referring server should not prejudge the issue of filehandle
   volatility for the server which actually has the fs.  By not
   providing the filehandle, the referring server allows the target
   server freedom to choose the filehandle value without constraint.

2.2.  Issues with the Error NFS4ERR_MOVED

   RFC3530, in addition to being somewhat unclear about the
   situations in which NFS4ERR_MOVED is to be returned, is self-
   contradictory.  In particular, in section 6.2 it is stated, "The
   NFS4ERR_MOVED error is returned for all operations except PUTFH
   and GETATTR.", which is contradicted by the error lists in the
   detailed operation descriptions.
   Specifically,

   o  NFS4ERR_MOVED is listed as an error code for PUTFH (section
      14.2.20), despite the statement noted above.

   o  NFS4ERR_MOVED is listed as an error code for GETATTR (section
      14.2.7), despite the statement noted above.

   o  Despite the "all operations except" in the statement above, six
      operations (PUTROOTFH, PUTPUBFH, RENEW, SETCLIENTID,
      SETCLIENTID_CONFIRM, RELEASE_OWNER) are not allowed to return
      NFS4ERR_MOVED.

2.2.1.  Issue of when to check current filehandle

   In providing the definition of NFS4ERR_MOVED, RFC 3530 refers to
   the "filesystem which contains the current filehandle object"
   being moved to another server.  This has led to some confusion
   when considering the case of operations which change the current
   filehandle, and potentially the current file system.  For example,
   a LOOKUP which causes a transition to an absent file system might
   be supposed to result in this error.  This should be clarified to
   make it explicit that only the current filehandle at the start of
   the operation can result in NFS4ERR_MOVED.

2.2.2.  Issue of GETFH

   While RFC 3530 does not make any exception for GETFH when the
   current filehandle is within an absent filesystem, the fact that
   GETFH is such a passive, purely interrogative operation may lead
   readers to wrongly suppose that an NFS4ERR_MOVED error will not
   arise in this situation.  Any new NFSv4 RFC should explicitly
   state that GETFH will return this error if the current filehandle
   is within an absent filesystem.

   This fact has a particular importance in the case of referrals, as
   it means that filehandles within absent filesystems will never be
   seen by clients.  Filehandles not seen by clients can pose no
   expiration or consistency issues on the target server.

2.2.3.  Handling of PUTFH

   As noted above, the handling of PUTFH regarding NFS4ERR_MOVED is
   not clear in RFC3530.
   Part of the problem is that there is felt to be a need for an
   exception for PUTFH, to enable the sequence PUTFH-
   GETATTR(fs_locations).  However, if one clearly establishes, as
   should be established, that the check for an absent filesystem is
   only to be made at the start of each operation, then no such
   exception is required.  The sequence PUTFH-GETATTR(fs_locations)
   requires an exception for the GETATTR but not for the PUTFH.

   PUTFH can return NFS4ERR_MOVED, but only if the current
   filehandle, as established by a previous operation, is within an
   absent filesystem.  Whether the filehandle established by the
   PUTFH is within an absent filesystem is of no consequence in
   determining whether such an error is returned, since the check is
   to be done at the start of the operation.

2.2.4.  Inconsistent handling of GETATTR

   While, as noted above, RFC 3530 indicates that NFS4ERR_MOVED is
   not returned for a GETATTR operation, NFS4ERR_MOVED is listed as
   an error that can be returned by GETATTR.  The best resolution for
   this is to limit the exception for GETATTR to the specific cases
   in which it is required.

   o  If all of the attributes requested can be provided (e.g. fsid,
      fs_locations, mounted_on_fileid in the case of the root of an
      absent filesystem), then NFS4ERR_MOVED is not returned.

   o  If an attribute is requested which indicates that the client is
      aware of the likelihood that migration has happened (such as
      fs_locations), then NFS4ERR_MOVED is not returned, irrespective
      of what additional attributes are requested.  The newly-
      proposed attributes fs_absent and fs_location_info (see
      sections 3.2.1 and 3.2.2) would, like fs_locations, also cause
      NFS4ERR_MOVED not to be returned.  For the rest of this
      document, the phrase "fs_locations-like attributes" is to be
      understood as including fs_locations, and the new attributes
      fs_absent and fs_location_info, if added to the protocol.
   In all other cases, if the current filesystem is absent,
   NFS4ERR_MOVED is to be returned.

2.2.5.  Ops not allowed to return NFS4ERR_MOVED

   As noted above, RFC 3530 does not allow the following ops to
   return NFS4ERR_MOVED:

   o  PUTROOTFH

   o  PUTPUBFH

   o  RENEW

   o  SETCLIENTID

   o  SETCLIENTID_CONFIRM

   o  RELEASE_OWNER

   All of these are ops which do not require a current filehandle,
   although two other ops that also do not require a current
   filehandle, DELEGPURGE and PUTFH, are allowed to return
   NFS4ERR_MOVED.

   There is no good reason to continue these as exceptions.  In
   future NFSv4 versions it should be the case that if there is a
   current filehandle and the associated filesystem is not present,
   an NFS4ERR_MOVED error should result, as it does for other ops.

2.2.6.  Summary of NFS4ERR_MOVED

   To summarize, NFSv4.1 should:

   o  Make clear that the check for an absent filesystem is to occur
      at the start (and only at the start) of each operation.

   o  Allow NFS4ERR_MOVED to be returned by all ops, including those
      not allowed to return it in RFC3530.

   o  Be clear about the circumstances in which GETATTR will or will
      not return NFS4ERR_MOVED.

   o  Delete the confusing text regarding an exception for PUTFH.

   o  Make it clear that GETFH will return NFS4ERR_MOVED rather than
      a filehandle within an absent filesystem.

2.3.  Issues of Incomplete Attribute Sets

   Migration or referral events naturally create situations in which
   not all of the attributes normally supported on a server are
   obtainable.  RFC3530 is in places ambivalent and/or apparently
   self-contradictory on such issues.  Any new NFSv4 RFC should take
   a clear position on these issues (and it should not impose undue
   difficulties on support for migration).
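   To make the intended behavior concrete, the GETATTR rules argued
   for above can be sketched as server-side pseudologic.  This is an
   illustrative sketch only: the attribute names come from RFC 3530
   and this document, but the function and the ABSENT_FS_SAFE set are
   invented here for exposition.

```python
# Sketch of how a server might answer a GETATTR whose current
# filehandle lies within an absent filesystem.  Illustrative only;
# RFC 3530 defines no such API, and ABSENT_FS_SAFE is invented here.

FS_LOCATIONS_LIKE = {"fs_locations", "fs_absent", "fs_location_info"}

# Attributes a referring server can reasonably supply for an absent fs.
ABSENT_FS_SAFE = FS_LOCATIONS_LIKE | {"fsid", "mounted_on_fileid"}

def getattr_absent_fs(requested):
    """Return (error, attrs) for a GETATTR against an absent fs."""
    requested = set(requested)
    if requested <= ABSENT_FS_SAFE:
        # Everything asked for can be provided: no NFS4ERR_MOVED.
        return None, {a: "<%s value>" % a for a in requested}
    if requested & FS_LOCATIONS_LIKE:
        # The client signalled migration awareness: return the subset
        # that is available, leaving the other attribute bits unset.
        avail = requested & ABSENT_FS_SAFE
        return None, {a: "<%s value>" % a for a in avail}
    # Otherwise, the absence check at the start of the operation fails.
    return "NFS4ERR_MOVED", {}
```

   For example, a request for only size and fileid would draw
   NFS4ERR_MOVED, while a request including fs_locations would
   succeed with a partial attribute set.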
   The first problem concerns the statement in the third paragraph of
   section 6.2: "If the client requests more attributes than just
   fs_locations, the server may return fs_locations only.  This is to
   be expected since the server has migrated the filesystem and may
   not have a method of obtaining additional attribute data."

   While the above seems quite reasonable, it is seemingly
   contradicted by the following text from section 14.2.7, the second
   paragraph of the DESCRIPTION for GETATTR: "The server must return
   a value for each attribute that the client requests if the
   attribute is supported by the server.  If the server does not
   support an attribute or cannot approximate a useful value then it
   must not return the attribute value and must not set the attribute
   bit in the result bitmap.  The server must return an error if it
   supports an attribute but cannot obtain its value.  In that case
   no attribute values will be returned."

   The above is a useful restriction in that it allows clients to
   simplify their attribute interpretation code: they may assume that
   all of the attributes they request are present, often making it
   possible to fetch successive attributes at fixed offsets within
   the data stream.  However, it seems to contradict what is said in
   section 6.2, where it is clearly anticipated, at least when
   fs_locations is requested, that fewer (often many fewer)
   attributes will be available than are requested.  It could be
   argued that the two could be harmonized by being creative with the
   interpretation of the phrase "if the attribute is supported by the
   server".
   One could argue that many attributes are not supported by the
   server for an absent fs, even though the text, by speaking of
   attributes "supported by a server", seems to indicate that support
   is not allowed to differ between different fs's (which is
   troublesome in itself, as a single server might have some
   filesystems that support acl's and some that do not, for example).

   Note, however, that the following paragraph in the description
   says, "All servers must support the mandatory attributes as
   specified in the section 'File Attributes'".  That is reasonable
   enough in general, but for an absent fs it is not reasonable, and
   so section 14.2.7 and section 6.2 are contradictory.  NFSv4.1
   should remove the contradiction by making an explicit exception
   for the case of an absent filesystem.

2.3.1.  Handling of attributes for READDIR

   A related issue concerns attributes in a READDIR.  There has been
   discussion, without any resolution yet, regarding the server's
   obligation (or not) to return the attributes requested with
   READDIR.  There has been discussion of cases in which this is
   inconvenient for the server, and an argument has been made that
   the attribute request should be treated as a hint, since the
   client can do a GETATTR to get requested attributes that are not
   supplied by the server.

   Regardless of how this issue is resolved, it needs to be made
   clear that, at least in the case of a directory that contains the
   roots of absent filesystems, the server must not be required to
   return attributes that it is simply unable to return, just as it
   cannot with GETATTR.

   The following rules, derived from section 3.1 of [referrals] and
   modified for the attribute changes suggested for NFSv4.1,
   represent a good basis for handling this issue, although the
   resolution of the general issue regarding the attribute mask for
   READDIR will affect the ultimate choices for NFSv4.1.
   o  When any of the fs_locations-like attributes is among the
      attributes requested, the server may provide a subset of the
      other requested attributes together with the requested
      fs_locations-like attributes for roots of absent fs's, without
      causing any error for the READDIR as a whole.  If rdattr_error
      is also requested and there are attributes which are not
      available, then rdattr_error will receive the value
      NFS4ERR_MOVED.

   o  When no fs_locations-like attributes are requested, but all of
      the attributes requested can be provided, then they will be
      provided and no NFS4ERR_MOVED will be generated.  An example
      would be READDIRs that request mounted_on_fileid either with or
      without fsid.

   o  When none of the fs_locations-like attributes is requested, but
      rdattr_error is, and some attributes requested are not
      available because of the absence of the filesystem, the server
      will return NFS4ERR_MOVED for the rdattr_error attribute and,
      in addition, the requested attributes that are valid for the
      root of an absent filesystem.

   o  When none of the fs_locations-like attributes is requested and
      there is a directory within an absent fs within the directory
      being read, if some unavailable attributes are requested, the
      handling will depend on the overall decision about READDIR
      referred to above.  If the attribute mask is to be treated as a
      hint, only available attributes will be returned.  Otherwise,
      no data will be returned and the READDIR will get an
      NFS4ERR_MOVED error.

2.4.  Referral Issues

   RFC 3530 defines a migration feature which allows the server to
   direct clients to another server for the purpose of accessing a
   given file system.
   While that document explains the feature in terms of a client
   accessing a given file system and then finding that it has moved,
   an important limiting case is that in which clients are redirected
   as part of their first attempt to access a given file system.

2.4.1.  Editorial Changes Related to Referrals

   Given the above framework for implementing referrals, within the
   basic migration framework described in RFC 3530, we need to
   consider how future NFSv4 RFC's should be modified, relative to
   RFC 3530, to address referrals.

   The most important change is to include an explanation of how
   referrals fit into the v4 migration model.  Since the existing
   discussion does not specifically call out the case in which the
   absence of a filesystem is noted while attempting to cross into
   the absent file system, it makes it hard to understand how
   referrals work and how they relate to other sorts of migration
   events.

   It makes sense to present a description of referrals in a new sub-
   section following the "Migration" section, which would be section
   6.2.1 given the current numbering scheme of RFC 3530.  The
   material in [referrals], suitably modified for the changes
   proposed for v4.1, would be very helpful in providing the basis
   for this sub-section.

   There are also a number of cases in which the existing wording of
   RFC 3530 seems to ignore the referral case of the migration
   feature.  In the following specific cases, some suggestions are
   made for edits to tidy this up.

   o  In section 1.4.3.3, in the third sentence of the first
      paragraph, the phrase "In the event of a migration of a
      filesystem" is unnecessarily restrictive, and having the
      sentence read "In the event of the absence of a filesystem, the
      client will receive an error when operating on the filesystem
      and it can then query the server as to the current location of
      the file system" would be better.
   o  In section 6.2, the following should be added as a new second
      paragraph: "Migration may be signaled when a file system is
      absent on a given server, even when the file system in question
      has never actually been located on that server.  In such a
      case, the server acts to refer the client to the proper fs
      location, using fs_locations to indicate the server location,
      with the existence of the server as a migration source being
      purely conventional."

   o  In the existing second paragraph of section 6.2, the first
      sentence should be modified to read as follows: "Once a
      filesystem has been successfully established at a new server
      location, the error NFS4ERR_MOVED will be returned for
      subsequent requests received by the server whose role is as the
      source of the filesystem, whether the filesystem actually
      resided on that server, or whether its original location was
      purely nominal (i.e. the pure referral case)."

   o  The following should be added as an additional paragraph at the
      end of section 6.4: "Note that in the case of a referral, there
      is no issue of filehandle recovery since no filehandles for the
      absent filesystem are communicated to the client (and neither
      is the fh_expire_type)."

   o  The following should be added as an additional paragraph at the
      end of section 8.14.1: "Note that in the case of a referral,
      there is no issue of state recovery since no state can have
      been generated for the absent filesystem."

   o  In section 12, in the description of NFS4ERR_MOVED, the first
      sentence should read, "The filesystem which contains the
      current filehandle object is now located on another server."

3.  Feature Extensions

   A number of small extensions can be made within NFSv4's minor
   versioning framework to enhance the ability to provide multi-
   vendor implementations of migration and replication where the
   transition from server instance to server instance is transparent
   to client users.  This includes transitions due to migration, or
   transitions among replicas due to server or network problems.
   These same extensions would enhance the ability of the server to
   present clients with multiple replicas in a referral situation, so
   that the most appropriate one might be selected.  These extensions
   would all be in the form of additional recommended attributes.

3.1.  Attribute Continuity

   There are a number of issues with the existing protocol that
   revolve around the continuity (or lack thereof) of attribute
   values across a migration event.  In some cases, the spec is not
   clear about whether such continuity is required, and different
   readers may make different assumptions.  In other cases,
   continuity is not required, but there are significant cases in
   which there would be a benefit, and there is no way for the client
   to take advantage of attribute continuity when it exists.  A third
   situation is that attribute continuity is generally assumed
   (although not specified in the spec), but allowing change at a
   migration event would add greatly to flexibility in handling a
   global namespace.

3.1.1.  filehandle

   The issue of filehandle continuity is not fully addressed in
   RFC3530.  In many cases of vendor-specific migration or
   replication (where an entire fs image is copied, for instance), it
   is relatively easy to provide that the same persistent filehandles
   used on the source server be recognized on the destination server.
On the other hand, for many forms of migration, filehandle continuity across a migration event cannot be provided, requiring that filehandles be re-established.  Within RFC3530, volatile filehandles (FH4_VOL_MIGRATION) are the only mechanism to satisfy this need, and in many environments they will work fine.

Unfortunately, in the case in which an open file is renamed by another client, the re-establishment of the filehandle on the destination target will give the wrong result and the client will attempt to re-open an incorrect file on the target.

There needs to be a way to address this difficulty in order to provide transparent switching among file system instances, both in the event of migration and when transitioning among replicas.

3.1.2.  fileid

RFC3530 gives no real guidance on the issue of continuity of fileid's in the event of migration or a transition between two replicas.  The general expectation has been that in situations in which the two filesystem instances are created by a single vendor using some sort of filesystem image copy, fileid's will be consistent across the transition, while in the analogous multi-vendor transitions they will not.  The latter can pose some difficulties.

It is important to note that while clients themselves may have no trouble with a fileid changing as a result of a filesystem transition event, applications typically do have access to the fileid (e.g. via stat).  The result is that an application may work perfectly well if there is no filesystem instance transition, or if any such transition is among instances created by a single vendor, yet be unable to deal with a multi-vendor transition that occurs at the wrong time.

Providing the same fileid's in a multi-vendor (multiple server vendors) environment has generally been held to be quite difficult.
While there is work to be done, it needs to be pointed out that this difficulty is partly self-imposed.  Servers have typically identified fileid with inode number, i.e. with a quantity used to find the file in question.  This identification poses special difficulties for migration of an fs between vendors, where assigning the same index to a given file may not be possible.  Note that the definition of fileid does not require that it be useful for finding the file in question, only that it be unique within the given fs.  Servers prepared to accept a fileid as a single piece of metadata and store it apart from the value used to index the file information can relatively easily maintain a fileid value across a migration event, allowing a truly transparent migration event.

In any case, where servers can provide continuity of fileids, they should, and the client should be able to find out that such continuity is available and take appropriate action.

3.1.3.  change attribute

Currently the change attribute is defined as strictly the province of the server, making it necessary for the client to re-establish the change attribute value on the new server.  This has the further consequence that the lack of continuity between change values on the source and destination servers creates a window during which there is no reliable way of determining whether caches are still valid.  Where there is a transition among writable filesystem instances, even if most of the access is for reading (in fact, particularly if it is), this can be a big performance issue.

Where the co-operating servers can provide continuity of change number across the migration event, the client should be able to determine this fact and use this knowledge to avoid unneeded attribute fetches and client cache flushes.

3.1.4.  fsid

Although RFC3530 does not say so explicitly, it has been the general expectation that although the fsid is expected to change as part of migration (since the fsid space is per-server), the boundaries of a filesystem when migrated will be the same as they were on the source.

The possibility of splitting an existing filesystem into two or more as part of migration can provide important additional functionality in a global namespace environment.  When one divides up pieces of a global namespace into convenient-sized fs's (to allow their independent assignment to individual servers), difficulties will arise over time.  As the sizes of directories grow, what was once a convenient set of files, embodied as a separate fs, may become inconveniently large.  This requires a means to divide it into a new set of pieces which are of a convenient size.  The important point is that while there are many ways to do that currently, they are all disruptive.  A method is needed which allows this division to occur without disrupting access.

3.2.  Additional Attributes

A small number of additional attributes in V4.1 can provide significant additional functionality by addressing the attribute continuity issues discussed above and by allowing more complete information about the possible replicas, post-migration locations, or referral targets for a given filesystem.  This allows the client to choose the one most suited to its needs and to handle the transition to a new target server more effectively.

All of the proposed attributes would be defined as validly requested when the current filehandle is within an absent filesystem, i.e. an attempt to obtain these attributes would not result in NFS4ERR_MOVED.  In some cases, it may be optional to actually provide the requested attribute information based on the presence or absence of the filesystem.
The specifics will be discussed under each of the individual attributes.

3.2.1.  fs_absent

In NFSv4.0, fs_locations is the only attribute whose fetching indicates that the client is aware of the possibility that the current filesystem may be absent.  Since fs_locations is a complicated attribute and the client may simply want an indication of whether the filesystem is present, we propose the addition of a boolean attribute named "fs_absent" to provide this information simply.

As noted above, this attribute, when supported, may be requested of absent filesystems without causing NFS4ERR_MOVED to be returned, and it should always be available.  Servers are strongly urged to support this attribute on all filesystems if they support it on any filesystem.

3.2.2.  fs_location_info

The fs_location_info attribute is intended as a more functional replacement for fs_locations, which will continue to exist and be supported.  Clients which need the additional information provided by this attribute will interrogate it and get the information from servers that support it.  When the server does not support fs_location_info, fs_locations can be used to get a subset of the information.  A server which supports fs_location_info MUST support fs_locations as well.

There are several sorts of additional information present in fs_location_info that are not available in fs_locations:

o  Attribute continuity information to allow a client to select a location which meets the transparency requirements of the applications accessing the data and to take advantage of optimizations that server guarantees as to attribute continuity may provide (e.g. change attribute).
o  Filesystem identity information which indicates when multiple replicas, from the client's point of view, correspond to the same target filesystem, allowing them to be used interchangeably, without disruption, as multiple paths to the same thing.

o  Information which will bear on the suitability of various replicas, depending on the use that the client intends.  For example, many applications need an absolutely up-to-date copy (e.g. those that write), while others may only need access to the most up-to-date copy reasonably available.

o  Server-derived preference information for replicas, which can be used to implement load-balancing while giving the client the entire fs list to be used in case the primary fails.

Attribute continuity and filesystem identity information define a number of identity relations among the various filesystem replicas.  Most often, the relevant question for the client will be whether a given replica is identical-with/continuous-to the current one in a given respect, but the information should also be available as to whether two other replicas match in that respect.

Such pairwise filesystem comparisons are encoded relatively compactly by associating with each replica a 32-bit integer, the location id.  The fs_location_info attribute then contains, for each of the identity relations among replicas, a 32-bit mask.  If that mask, when anded with the location ids of the two replicas, results in fields which are identical, then the two replicas are defined as belonging to the corresponding identity relation.  This scheme allows the server to accommodate relatively large sets of replicas, distinct according to a given criterion, without requiring large amounts of data to be sent for each replica.
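The identity-relation test just described can be sketched in a few lines.  This is a non-normative illustration; the mask and location-id values are hypothetical, chosen only to show the anding step:

```python
def same_relation(loc_id_a: int, loc_id_b: int, relation_mask: int) -> bool:
    """Two replicas belong to an identity relation when their 32-bit
    location ids agree on every bit selected by the relation's mask."""
    return (loc_id_a & relation_mask) == (loc_id_b & relation_mask)

# Hypothetical values: two replicas of the same fs image reached
# through different servers, plus an unrelated third copy.
SAME_FH_MASK = 0xFF00   # hypothetical mask for the same-fh relation
replica_a = 0x1111
replica_b = 0x1112
replica_z = 0x2348

print(same_relation(replica_a, replica_b, SAME_FH_MASK))  # True
print(same_relation(replica_a, replica_z, SAME_FH_MASK))  # False
```

Because only one 32-bit location id per replica plus one mask per relation is transmitted, the cost of the encoding stays constant no matter how many replicas fall into each similarity class.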
Server-specified preference information is also provided in a fashion that expresses a number of different relations (in this case order relations) in a compact way.  Each location4_server structure contains a 32-bit priority word which can be broken into fields devoted to these relations in any way the server wishes.  The location4_info structure contains a set of 32-bit masks, one for each relation.  Two replicas can be compared via that relation by anding the corresponding mask with the priority word for each replica and comparing the results.

The fs_location_info attribute consists of a root pathname (just like fs_locations), together with an array of location4_item structures.

   struct location4_server {
           uint32_t        priority;
           uint32_t        flags;
           uint32_t        location_id;
           int32_t         currency;
           utf8str_cis     server;
   };

   const LIF_FHR_OPEN   = 0x00000001;
   const LIF_FHR_ALL    = 0x00000002;
   const LIF_MULTI_FS   = 0x00000004;
   const LIF_WRITABLE   = 0x00000008;
   const LIF_CUR_REQ    = 0x00000010;
   const LIF_ABSENT     = 0x00000020;
   const LIF_GOING      = 0x00000040;

   struct location4_item {
           location4_server        entries<>;
           pathname4               rootpath;
   };

   struct location4_info {
           pathname4               fs_root;
           location4_item          items<>;
           uint32_t                fileid_keep_mask;
           uint32_t                change_cont_mask;
           uint32_t                same_fh_mask;
           uint32_t                same_state_mask;
           uint32_t                same_fs_mask;
           uint32_t                valid_for;
           uint32_t                read_rank_mask;
           uint32_t                read_order_mask;
           uint32_t                write_rank_mask;
           uint32_t                write_order_mask;
   };

The fs_location_info attribute is structured similarly to the fs_locations attribute.  A top-level structure (fs_locations4 or location4_info) contains the entire attribute, including the root pathname of the fs and an array of lower-level structures that define replicas that share a common root path on their respective servers.
Those lower-level structures in turn (fs_locations4 or location4_item) contain a specific pathname and information on one or more individual server replicas.  For that lowest-level information, fs_locations has a server name in the form of utf8str_cis, while fs_location_info has a location4_server structure that contains per-server-replica information in addition to the server name.

The location4_server structure consists of the following items:

o  The priority word is used to implement server-specified ordering relations among replicas.  These relations are intended to be used to select a replica when migration (including a referral) occurs, when a server appears to be down, when the server directs the client to find a new replica (see LIF_GOING) and, optionally, when a new filesystem is first entered.  See the location4_info fields read_order, read_rank, write_order, and write_rank for details of the ordering relations.

o  A word of flags providing information about this replica/target.  These flags are defined below.

o  An indication of file system up-to-date-ness (currency) in terms of approximate seconds before the present.  A negative value indicates that the server is unable to give any reasonably useful value here.  A zero indicates that the filesystem is the actual writable data or a reliably coherent and fully up-to-date copy.  Positive values indicate how out-of-date this copy can normally be before it is considered for update.  Such a value is not a guarantee that such updates will always be performed on the required schedule but instead serves as a hint about how far behind the most up-to-date copy of the data this copy would normally be expected to be.

o  A location id for the replica, to be used together with masks in the location4_info structure to determine whether that replica matches others in various respects, as described above.
   See below (after the mask definitions) for an example of how the location_id can be used to communicate filesystem information.

   When two location ids are identical, access to the corresponding replicas is defined as identical in all respects.  They access the same filesystem with the same filehandles and share v4 file state.  Further, multiple connections to the two replicas may be made as part of the same session.  Two such replicas will share a common root path and are best presented within two location4_server entries in a common location4_item.  These replicas should have identical values for the currency field, although the flags and priority fields may be different.

   Clients may find it helpful to associate all of the location4_server structures that share a location_id value and treat this set as representing a single fs target.  When they do so, they should take proper care to note that the priority fields for these may be different and the selection of a location4_server needs to reflect rank and order considerations (see below) for the individual entries.

o  The server string.  For the case of the replica currently being accessed (via GETATTR), a null string may be used to indicate the address being used for the RPC call.

The flags field has the following bits defined:

o  LIF_FHR_OPEN indicates that the server will normally make a replacement filehandle available for files that are open at the time of a filesystem image transition.  When this flag is associated with an alternative filesystem instance, the client may get the replacement filehandle to be used on the new filesystem instance from the current server.  When this flag is associated with the current filesystem instance, a replacement for filehandles from a previous instance may be obtained on this one.  See section 3.2.3, fh_replacement, for details.
   Because of the possibility of hardware and software failures, this is not a guarantee, but when this bit is returned, the server should make all reasonable efforts to provide the replacement filehandle.

o  LIF_FHR_ALL indicates that a replacement filehandle will be made available for all files when there is a migration event or a replica switch.  Like LIF_FHR_OPEN, it may indicate replacement availability on the source or the destination, and the details are described in section 3.2.3.

o  LIF_MULTI_FS indicates that when a transition occurs from the current filesystem instance to this one, the replacement may consist of multiple filesystems.  In this case, the client has to be prepared for the possibility that objects on the same fs before migration will be on different ones after.  Note that LIF_MULTI_FS is not incompatible with the two filesystems agreeing with respect to the fileid-keep mask since, if one has a set of fileid's that are unique within an fs, each subset assigned to a smaller fs after migration would not have any conflicts internal to that fs.

   A client, in the case of a split filesystem, will interrogate existing files with which it has a continuing connection (it is free simply to forget cached filehandles).  If the client remembers the directory filehandle associated with each open file, it may proceed upward using LOOKUPP to find the new fs boundaries.

   Once the client recognizes that one filesystem has been split into two, it could keep applications running without disruption by presenting the two filesystems as a single one until a convenient point at which to recognize the transition, such as a reboot.  This would require a mapping from the server's fsids to the fsids seen by the client, but this is already necessary for other reasons anyway.  As noted above, existing fileids within the two descendant fs's will not conflict.
   Creation of new files in the two descendant fs's may require some amount of fileid mapping, which can be performed very simply in many important cases.

o  LIF_WRITABLE indicates that this fs target is writable, allowing it to be selected by clients which may need to write on this filesystem.  When the current filesystem instance is writable, any other filesystem to which the client might switch must incorporate within its data any committed write made on the current filesystem instance.  See below, in the section on the same-fs mask, for issues related to uncommitted writes.  While there is no harm in not setting this flag for a filesystem that turns out to be writable, turning the flag on for a read-only filesystem can cause problems for clients who select a migration or replication target based on it and then find themselves unable to write.

o  LIF_VLCACHE indicates that the server is a cached copy where the measured latency of operation may differ very significantly depending on the particular data requested, in that already cached data may be provided with very low latency while other data may require transfer from a distant source.

o  LIF_CUR_REQ indicates that this replica is the one on which the request is being made.  Only a single server entry may have this flag set, and in the case of a referral, no entry will have it.

o  LIF_ABSENT indicates that this entry corresponds to an absent filesystem replica.  It can only be set if LIF_CUR_REQ is set.  When both such bits are set, it indicates that a filesystem instance is not usable but that the information in the entry can be used to determine the sorts of continuity available when switching from this replica to other possible replicas.
   Since this bit can only be true if LIF_CUR_REQ is true, the value could be determined using the fs_absent attribute, but the information is also made available here for the convenience of the client.  An entry with this bit, since it represents a true filesystem (albeit absent), does not appear in the event of a referral, but only where a filesystem has been accessed at this location and subsequently been migrated.

o  LIF_GOING indicates that a replica, while still available, should not be used further.  The client, if using it, should make an orderly transfer to another filesystem instance as expeditiously as possible.  It is expected that filesystems going out of service will be announced as LIF_GOING some time before the actual loss of service and that the valid_for value will be sufficiently small to allow clients to detect and act on scheduled events while large enough that the cost of the requests to fetch the fs_location_info values will not be excessive.  Values on the order of ten minutes seem reasonable.

The location4_item structure, analogous to an fs_locations4 structure, specifies a root pathname used by an array of server replica entries.

The location4_info structure, encoding the fs_location_info attribute, contains the following:

o  The fs_root field, which contains the pathname of the root of the current filesystem on the current server, just as it does in the fs_locations4 structure.

o  An array of location4_item structures, which contain information about replicas of the current filesystem.  Where the current filesystem is actually present, or has been present, i.e. this is not a referral situation, one of the location4_item structures will contain a location4_server for the current server.  This structure will have LIF_ABSENT set if the current filesystem is absent, i.e. normal access to it will return NFS4ERR_MOVED.

o  The fileid-keep mask indicates, in combination with the appropriate location ids, that fileids will not change (i.e. they will be reliably maintained with no lack of continuity) across a transition between the two filesystem instances, whether by migration or a replica transition.  This allows a transition to occur safely without any chance that applications that depend on fileids will be impacted.

o  The change-cont mask indicates, in combination with the appropriate location ids, that the change attribute is continuous across a migration event between the servers within any pair of replicas.  In other words, if the change attribute has a given value before the migration event, then it will have that same value after, unless there has been an intervening change to the file.  This information is useful after a migration event in avoiding any need to refetch change information or any requirement to needlessly flush cached data because of a lack of reliable change information.  Although change attribute continuity allows the client to dispense with any migration-specific refetching of change attributes, it still must fetch the attribute in all cases in which it would normally do so if there had been no migration.  In particular, when an open-reclaim is not available and the file is re-opened, a check for an unexpected change in the change attribute must be done.

o  The same-fh mask indicates, in combination with the appropriate location ids, whether two replicas will have the same fh's for corresponding objects.  When this is true, both filesystems must have the same filehandle expiration type.  When this is true and that type is persistent, those filehandles may be used across a migration event without disruption.
o  The same-state mask indicates, in combination with the appropriate location ids, whether two replicas will have the same state environment.  This does not necessarily mean that when performing migration the client will not have to reclaim state.  However, it does mean that the client may proceed using its current clientid just as if there were no migration event, and only reclaim state when an NFS4ERR_STALE_CLIENTID or NFS4ERR_STALE_STATEID error is received.

   Filesystems marked as having the same state should also have the same filehandles.  In other words, the same-fh mask should be a subset (not necessarily proper) of the same-state mask.

o  The same-fs mask indicates, in combination with the appropriate location ids, whether two replicas in fact designate the same filesystem in all respects.  If so, any action taken on one is immediately reflected on the other, and the client can consider them as effectively the same thing.

   The same-fs mask must include all bits in the same-fh mask, the change-cont mask, and the same-state mask.  Thus, filesystem instances marked as same-fs must also share state, have the same filehandles, and be change continuous.  These considerations imply that a transition can occur with no application disruption and no significant client work to update state related to the filesystem.

   When the same-fs mask indicates two filesystems are the same, clients are entitled to assume that there will also be no significant delay for the server to re-establish its state to effectively support the client.
   Where same-fs is not true and the other constituent continuity indications are true (fileid-keep, change-cont, same-fh), there may be significant delay under some circumstances, in line with the fact that the filesystems are being represented as being carefully kept in complete synchronization yet are not the same.

   When two filesystems on separate servers have location ids which match on all the bits within the same-fs mask, clients should present the same nfs_client_id to both, with the expectation that the servers may be able to generate a shared clientid to be used when communicating with either.  Such servers are expected to co-ordinate at least to the degree that they will not provide the same clientid to a client while not actually sharing the underlying state data.

   In the handling of uncommitted writes, for two servers with any pair of filesystems having the same-fs relation, write verifiers must be sufficiently unique that a client switching between the servers can determine whether previous async writes need to be reissued.  This is unlike the general case of filesystems not bearing this relation, in which it must be assumed that asynchronous writes will be lost across a filesystem transition.

   When two replicas' location ids match on all the bits within the same-fs mask but are not identical, a client using sessions will establish separate sessions to each, which together share any such common clientid.

o  The valid_for field specifies a time for which it is reasonable for a client to use the fs_location_info attribute without refetching it.  The valid_for value does not provide a guarantee of validity, since servers can unexpectedly go out of service or become inaccessible for any number of reasons.  Clients are well-advised to refetch this information for actively accessed filesystems every valid_for seconds.
   This is particularly important when filesystem replicas may go out of service in a controlled way, using the LIF_GOING flag to communicate an ongoing change.  The server should set valid_for to a value which allows well-behaved clients to notice the LIF_GOING flag and make an orderly switch before the loss of service becomes effective.  If this value is zero, then no refetch interval is appropriate and the client need not refetch this data on any particular schedule.

   In the event of a transition to a new filesystem instance, a new value of the fs_location_info attribute will be fetched at the destination, and it is to be expected that this may have a different valid_for value, which the client should then use in the same fashion as the previous value.

o  The read-rank, read-order, write-rank, and write-order masks are used, together with the priority words of the various replicas, to order the replicas according to the server's preference.  See the discussion below for the interaction of rank, order, and the client's own preferences and needs.  Read-rank and read-order are used to direct clients which only need read access, while write-rank and write-order are used to direct clients that require some degree of write access to the filesystem.

Depending on the potential need for write access by a given client, one of the pairs of rank and order masks is used, together with the priority words, to determine a rank and an order for each instance under consideration.  The read rank and order should only be used if the client knows that only reading will ever be done or if it is prepared to switch to a different replica in the event that any write access capability is required in the future.
The rank is obtained by anding the selected rank mask with the priority word, and the order is obtained similarly by anding the selected order mask with the priority word.  The resulting rank and order are compared as described below, with lower always being better (more preferred).

Rank is used to express a strict server-imposed ordering on clients, with lower values indicating "more preferred."  Clients should attempt to use all replicas with a given rank before they use one with a higher rank.  Only if all of those servers are unavailable should the client proceed to servers of a higher rank.

Within a rank, the order value is used to specify the server's preference, guiding the client's selection when the client's own preferences are not controlling, with lower values of order indicating "more preferred."  If replicas are approximately equal in all respects, clients should defer to the order specified by the server.  Clients which look at server latency as part of their selection are free to use that criterion, but it is suggested that when latency differences are not significant, the server-specified order should guide selection.

The server may configure the rank and order masks to considerably simplify the decisions if it so chooses.  For example, if the read vs. write distinction is not to be important in the selection process, the location4_info should be one in which the read-rank and write-rank masks, and the read-order and write-order masks, are equal.  If the server wishes to totally direct the process via rank, leaving no room for client choice, it may simply set the write-order mask and the read-order mask to zero.  Conversely, if it wishes to give general preferences with more scope for client choice, it may set the read-rank mask and the write-rank mask to zero.
A server may 1206 even set all the masks to zero and allow the client to make its own 1207 choices. The protocol allows multiple policies to be used as found 1208 appropriate. 1210 The use of location id together with the masks in the location4_info 1211 structure can be illustrated by an example. 1213 Suppose one has the following sets of servers: 1215 o Server A with four IP addresses A1 through A4. 1217 o Servers B, C, D sharing a cluster filesystem with A and each 1218 having four IP addresses, B1, B2, ... D3, D4. 1220 o A point-in-time copy of the filesystem created using image 1221 copy which shares filehandles and is change-attribute 1222 continuous with the filesystem on A-D and has two IP addresses 1223 X1 and X2. 1225 o A point-in-time copy of the filesystem which was created at a 1226 higher level, shares fileids with the one on A-D, and is 1227 accessed (via a clustered filesystem) by servers Ya and Yb. 1229 o A copy of the filesystem made by simple user-level copy 1230 tools and which is served from server Z. 1232 Given the above, one way of presenting these relationships is to 1233 assign the following location ids: 1235 o A1-4 would get 0x1111 1237 o B1-4 would get 0x1112 1239 o C1-4 would get 0x1113 1241 o D1-4 would get 0x1114 1242 o X1-2 would get 0x1125 1244 o Ya would get 0x1236 1246 o Yb would get 0x1237 1248 o Z would get 0x2348 1250 And then the following mask values would be used: 1252 o The same-fs and same-state masks would both be 0xfff0. 1254 o The same-fh and change-cont masks would both be 0xff00. 1256 o The keep-fileid mask would be 0xf000. 1258 This scheme allows the number of bits devoted to various kinds of 1259 similarity classes to be adjusted as needed with no change to the 1260 protocol. The total of thirty-two bits is expected to suffice 1261 indefinitely.
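The mask arithmetic in the example above can be sketched in a few lines. This is an illustrative sketch, not protocol text: the helper name is hypothetical, the mask and location id values are those of the example, and the keep-fileid mask is taken as 0xf000 so that the Y servers share fileids with A-D as the example intends.

```python
# Illustrative sketch of the similarity-class check described above.
# Two replicas fall in the same class when their location ids agree
# under the corresponding mask.  Values are taken from the example;
# the keep-fileid mask of 0xf000 is an assumption (see lead-in).

SAME_FS_MASK     = 0xfff0   # also used for same-state in the example
SAME_FH_MASK     = 0xff00   # also used for change-cont
KEEP_FILEID_MASK = 0xf000   # assumed value

def same_class(loc_id_a, loc_id_b, mask):
    """True when the two location ids match under the given mask."""
    return (loc_id_a & mask) == (loc_id_b & mask)

# Location ids from the example:
A1, B1, X1, Ya, Z = 0x1111, 0x1112, 0x1125, 0x1236, 0x2348

# A1 and B1 share a cluster filesystem: same fs.
assert same_class(A1, B1, SAME_FS_MASK)
# X1 shares filehandles with A-D but is a distinct fs instance.
assert same_class(A1, X1, SAME_FH_MASK)
assert not same_class(A1, X1, SAME_FS_MASK)
# Ya shares fileids with A-D; Z, a user-level copy, shares nothing.
assert same_class(A1, Ya, KEEP_FILEID_MASK)
assert not same_class(A1, Z, KEEP_FILEID_MASK)
```

A client comparing a prospective replica against its current instance would apply exactly this test, using the mask for whichever property (same fs, shared filehandles, preserved fileids) it needs to preserve across the transition.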
1263 As noted above, the fs_location_info attribute, when supported, may 1264 be requested of absent filesystems without causing NFS4ERR_MOVED to 1265 be returned, and it is generally expected that it will be available for 1266 both present and absent filesystems, even if only a single 1267 location_server entry is present, designating the current (present) 1268 filesystem, or two location_server entries designating the current 1269 (and now previous) location of an absent filesystem and its 1270 successor location. Servers are strongly urged to support this 1271 attribute on all filesystems if they support it on any filesystem. 1273 3.2.3. fh_replacement 1275 The fh_replacement attribute provides a means of supplying a 1276 substitute filehandle to be used on a target server when a 1277 migration event or other fs instance switching event occurs. This 1278 provides an alternative to maintaining access via the existing 1279 persistent filehandle (which may be difficult) or using volatile 1280 filehandles (which will not give the correct result in all cases). 1282 When a migration event occurs, information on the new location (or 1283 location choices) will be available via the fs_location_info 1284 attribute applied to any filehandle within the source filesystem. 1285 When LIF_FHR_OPEN or LIF_FHR_ALL is present, the fh_replacement 1286 attribute may be used to get the corresponding filehandles for 1287 the filehandles that the client has accessed. 1289 Similarly, after such an event, when the fs_location_info attribute 1290 is fetched on the new server, LIF_FHR_OPEN or LIF_FHR_ALL may be 1291 present in the server entry corresponding to the current filesystem 1292 instance. In this case, the fh_replacement attribute can be used 1293 to get the new filehandles corresponding to each of the now 1294 outdated filehandles on the previous instance.
In either of these 1295 ways, the client may be assured of a consistent mapping from old to 1296 new filehandles without relying on a purely name-based mapping, 1297 which in some cases will not be correct. 1299 The choice of providing replacement on the source filesystem 1300 instance or the target will normally be based on which server has 1301 the proper mapping. Generally, when the image is created by a push 1302 from the source, the source server naturally has the appropriate 1303 filehandles corresponding to its files and can provide them to the 1304 client. When the image transfer is done via pull, the target 1305 server will be aware of the source filehandles and can provide the 1306 appropriate mapping when the client requests it. Note that the 1307 target server can only provide replacement filehandles if it can 1308 assure filehandle uniqueness, i.e. that filehandles from the 1309 source do not conflict with valid filehandles on the destination 1310 server. In the case where such uniqueness can be assured, source 1311 filehandles can be accepted for the purpose of providing 1312 replacements, with NFS4ERR_FHEXPIRED returned for any use other than 1313 interrogation of the fh_replacement attribute via GETATTR. 1315 Multiple filehandle replacements on different migration targets may be 1316 provided via multiple fhrep4 entries. Each fhrep4_entry provides a 1317 replacement filehandle applying to all targets whose location id, 1318 when anded with the fh-same mask (from the fs_location_info 1319 attribute), matches the location_set value in the fhrep4_entry. 1320 Such a set of replicas shares the same filehandle, and thus a 1321 single entry can provide replacement filehandles for all of its 1322 members. Note that the location_set value will only match that of 1323 the current filesystem instance when the client presents a 1324 filehandle from the previous filesystem instance and the target 1325 filesystem provides its own replacement filehandles.
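The matching rule just described can be sketched as follows. This is an illustrative sketch under assumed names: the function name is hypothetical and each entry is modeled as a simple (location_set, replacement filehandle) pair, whereas the protocol carries this data in fhrep4_entry structures.

```python
# Illustrative sketch: select the replacement filehandle for a target
# replica by anding its location id with the fh-same mask and matching
# the result against each entry's location_set, as described above.

def replacement_for(target_loc_id, fh_same_mask, entries):
    """entries: iterable of (location_set, replacement_fh) pairs.
    Returns the replacement filehandle for the target, or None if
    no entry covers the target's filehandle-sharing class."""
    for location_set, replacement_fh in entries:
        if (target_loc_id & fh_same_mask) == location_set:
            return replacement_fh
    return None

# Hypothetical example: with an fh-same mask of 0xff00, one entry
# covers every replica whose location id masks to 0x1100, another
# every replica masking to 0x1200.
entries = [(0x1100, b'\x0a\x0b'), (0x1200, b'\x0c\x0d')]
assert replacement_for(0x1125, 0xff00, entries) == b'\x0a\x0b'
assert replacement_for(0x1236, 0xff00, entries) == b'\x0c\x0d'
assert replacement_for(0x2348, 0xff00, entries) is None
```

One lookup per target replica suffices; because all members of a filehandle-sharing class mask to the same location_set, a single entry serves the whole class.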
1327 union fhrep4_entry switch (bool present) { 1328 case TRUE: struct { uint32_t location_set; 1329 nfs_fh4 replacement; } fhr_info; 1330 case FALSE: void; }; 1332 struct fh4_replacement { 1333 fhrep4_entry entries<>; 1334 }; 1336 When a filesystem becomes absent, the server, in responding to 1337 requests for the fh_replacement attribute, is not required to 1338 validate all fields of the filehandle if it does not maintain per-file 1339 information. This matches the current handling of fs_locations (and 1340 applies as well to fs_location_info). For example, if a server has 1341 an fsid field within its filehandle implementation, it may simply 1342 recognize that value and return filehandles with the corresponding 1343 new fsid without validating other information within the handle. 1344 This can result in a filesystem accepting a filehandle which, under 1345 other circumstances, might result in NFS4ERR_STALE, just as it can 1346 when interrogating the fs_locations or fs_location_info attributes. 1347 Note that when it does so, it will return a replacement which, when 1348 presented to the new filesystem, will get an NFS4ERR_STALE there. 1350 Use of the fh_replacement attribute can allow wholesale change of 1351 filehandles to implement storage re-organization even within the 1352 context of a single server. If NFS4ERR_MOVED is returned, the 1353 client will fetch fs_location_info which may refer to a location on 1354 the original server. Use of fh_replacement in this context allows 1355 a new set of filehandles to be established as part of storage 1356 reconfiguration (including possibly a split into multiple fs's) 1357 without requiring the client to maintain name information against 1358 the possibility of such a reconfiguration (for volatile 1359 filehandles).
1361 Servers are not required to maintain the availability of 1362 replacement filehandles for any particular length of time, but in 1363 order to maintain continuity of access in the face of network 1364 disruptions, servers should generally maintain the mapping from the 1365 pre-replacement file handles persistently across server reboots, 1366 and for a considerable time. It should be the case that even under 1367 severe network disruption, any client that received pre-replacement 1368 filehandles is given an opportunity to obtain the replacements. 1369 When this mapping is no longer made available, the pre-replacement 1370 filehandles should not be re-used, just as is the case for any 1371 other superseded file handle. 1373 As noted above, this attribute, when supported, may be requested of 1374 absent filesystems without causing NFS4ERR_MOVED to be returned, 1375 and it should always be available. When it is requested and the 1376 attribute is supported, if no replacement file handle information 1377 is present, either because the filesystem is still present and 1378 there is no migration event or because there are currently no 1379 replacement filehandles available, a zero-length array of 1380 fhrep4_entry structures should be returned. 1382 3.2.4. fs_status 1384 In an environment in which multiple copies of the same basic set of 1385 data are available, information regarding the particular source of 1386 such data and the relationships among different copies can be very 1387 helpful in providing consistent data to applications. 1389 enum status4_type { 1390 STATUS4_FIXED = 1, 1391 STATUS4_UPDATED = 2, 1392 STATUS4_VERSIONED = 3, 1393 STATUS4_WRITABLE = 4, 1394 STATUS4_ABSENT = 5 1395 }; 1397 struct fs4_status { 1398 status4_type type; 1399 utf8str_cs source; 1400 utf8str_cs current; 1401 nfstime4 version; 1402 }; 1404 The type value indicates the kind of filesystem image represented.
1405 This is of particular importance when using the version values to 1406 determine appropriate succession of filesystem images. Five types 1407 are distinguished: 1409 o STATUS4_FIXED which indicates a read-only image in the sense 1410 that it will never change. The possibility is allowed that as 1411 a result of migration or switch to a different image, changed 1412 data can be accessed, but within the confines of this instance 1413 no change is allowed. The client can use this fact to 1414 cache aggressively. 1416 o STATUS4_UPDATED which indicates an image that cannot be 1417 updated by the user writing to it but may be changed 1418 exogenously, typically because it is a periodically updated 1419 copy of another writable filesystem somewhere else. 1421 o STATUS4_VERSIONED which indicates that the image, like the 1422 STATUS4_UPDATED case, is updated exogenously, but it provides 1423 a guarantee that the server will carefully update the 1424 associated version value so that the client may, if it 1425 chooses, protect itself from a situation in which it reads 1426 data from one version of the filesystem, and then later reads 1427 data from an earlier version of the same filesystem. See 1428 below for a discussion of how this can be done. 1430 o STATUS4_WRITABLE which indicates that the filesystem is an 1431 actual writable one. The client need not, of course, actually 1432 write to the filesystem, but once it does, it should not 1433 accept a transition to anything other than a writable instance 1434 of that same filesystem. 1436 o STATUS4_ABSENT which indicates that the information is the 1437 last valid data for a filesystem which is no longer present. 1439 The opaque strings source and current provide a way of presenting 1440 information about the source of the filesystem image being presented. 1441 It is not intended that clients do anything with this information 1442 other than make it available to administrative tools.
It is 1443 intended that this information be helpful when researching possible 1444 problems with a filesystem image that might arise when it is 1445 unclear whether the correct image is being accessed and, if not, how that 1446 image came to be made. This kind of debugging information will be 1447 helpful if, as seems likely, copies of filesystems are made in 1448 many different ways (e.g. simple user-level copies, filesystem- 1449 level point-in-time copies, cloning of the underlying storage), 1450 under a variety of administrative arrangements. In such 1451 environments, determining how a given set of data was constructed 1452 can be very helpful in resolving problems. 1454 The opaque string 'source' is used to indicate the source of a 1455 given filesystem with the expectation that tools capable of 1456 creating a filesystem image propagate this information, when that 1457 is possible. It is understood that this may not always be possible 1458 since a user-level copy may be thought of as creating a new data 1459 set and the tools used may have no mechanism to propagate this 1460 data. When a filesystem is initially created, data regarding how the 1461 filesystem was created, where it was 1462 created, by whom, etc., can be put in this attribute in a human- 1463 readable string form so that it will be available when propagated 1464 to subsequent copies of this data. 1466 The opaque string 'current' should provide whatever information is 1467 available about the source of the current copy. Such information 1468 might include the tool that created it, any relevant parameters to that tool, the 1469 time at which the copy was done, the user making the change, and the 1470 server on which the change was made. All information should be 1471 in human-readable string form.
When the filesystem type is anything other than 1476 STATUS4_VERSIONED, the server may provide such a value but there is 1477 no guarantee as to its validity and clients will not use it except 1478 to provide additional information to add to 'source' and 'current'. 1480 When the type is STATUS4_VERSIONED, servers should provide a value 1481 of version which progresses monotonically whenever any new version 1482 of the data is established. This allows the client, if reliable 1483 image progression is important to it, to fetch this attribute as 1484 part of each COMPOUND where data or metadata from the filesystem is 1485 used. 1487 When it is important to the client to make sure that only valid 1488 successor images are accepted, it must make sure that it does not 1489 read data or metadata from the filesystem without updating its 1490 sense of the current state of the image, to avoid the possibility 1491 that the fs_status which the client holds will be one for an 1492 earlier image, and so accept a new filesystem instance which is 1493 later than that but still earlier than updated data read by the 1494 client. 1496 In order to do this reliably, it must do a GETATTR of fs_status 1497 that follows any interrogation of data or metadata within the 1498 filesystem in question. Often this is most conveniently done by 1499 appending such a GETATTR after all other operations that reference 1500 a given filesystem. When errors occur between reading filesystem 1501 data and performing such a GETATTR, care must be exercised to make 1502 sure that the data in question is not used before obtaining the 1503 proper fs_status value. In this connection, when an OPEN is done 1504 within such a versioned filesystem and the associated GETATTR of 1505 fs_status is not successfully completed, the open file in question 1506 must not be accessed until that fs_status is fetched. 
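The version-checking discipline described above can be sketched as follows. This is an illustrative client-side sketch, not protocol text: the class and method names are hypothetical, nfstime4 values are modeled as (seconds, nseconds) tuples (which compare correctly lexicographically), and the numeric value used for STATUS4_VERSIONED is assumed.

```python
# Illustrative client-side sketch: track the newest fs_status version
# observed, and accept a transition only to an instance that is
# STATUS4_VERSIONED and not earlier than that version.

STATUS4_VERSIONED = 3  # assumed enum value, for illustration only

class VersionGuard:
    """Guards against regressing to an older filesystem image."""

    def __init__(self):
        self.latest = None  # newest (seconds, nseconds) version seen

    def observe(self, version):
        """Record the fs_status version fetched via the GETATTR that
        follows each use of data or metadata from the filesystem."""
        if self.latest is None or version > self.latest:
            self.latest = version

    def acceptable_successor(self, status_type, version):
        """A candidate instance is acceptable only if it is versioned
        and its version is not earlier than the last one obtained."""
        if status_type != STATUS4_VERSIONED:
            return False
        return self.latest is None or version >= self.latest

guard = VersionGuard()
guard.observe((1000, 0))
guard.observe((1000, 500))   # versions from multiple in-flight requests
assert guard.acceptable_successor(STATUS4_VERSIONED, (1001, 0))
assert not guard.acceptable_successor(STATUS4_VERSIONED, (999, 0))
```

Multiple versions observed from requests in flight are resolved by observe() keeping only the newest, which corresponds to assembling them into order and using the last.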
1508 The procedure above will ensure that before using any data from the 1509 filesystem the client has in hand a newly-fetched current version 1510 of the filesystem image. Multiple values for multiple requests in 1511 flight can be resolved by assembling them into the required partial 1512 order (and the elements should form a total order within it) and 1513 using the last. The client may then, when switching among 1514 filesystem instances, decline to use an instance which is not of 1515 type STATUS4_VERSIONED or whose version field is earlier than the 1516 last one obtained from the predecessor filesystem instance. 1518 4. Migration Protocol 1520 As discussed above, it has always been anticipated that a migration 1521 protocol would be developed, to address the issue of migration of a 1522 filesystem between different filesystem implementations. This need 1523 remains, and it can be expected that as client implementations of 1524 migration become more common, it will become more pressing and the 1525 working group needs to seriously consider how that need may be best 1526 addressed. 1528 We are going to suggest that the working group should seriously 1529 consider what may be a significantly lighter-weight alternative, 1530 the addition of features to support server-to-server migration 1531 within NFSv4 itself, but taking advantage of existing NFSv4 1532 facilities and only adding the features needed to support efficient 1533 migration, as items within a minor version. 1535 One thing that needs to be made clear is that a common migration 1536 protocol does not mean a common migration approach or common 1537 migration functionality. Thus the need for the kinds of 1538 information provided by fs_location_info. 
For example, the fact 1539 that the migration protocol will make available on the target the 1540 file id, file handle, and change attribute from the source, does 1541 not mean that the receiving server can store these values natively, or 1542 that it will choose to implement translation support to accommodate 1543 the values exported by the source. This will remain an 1544 implementation choice. Clients will need information about those 1545 various choices, such as would be provided by fs_location_info, in 1546 order to deal with the various implementations. 1548 4.1. NFSv4.x as a Migration Protocol 1550 Whether the following approach or any other is adopted, 1551 considerable work will still be required to flesh out the details, 1552 requiring a number of drafts for a problem statement, initial 1553 protocol spec, etc. But to give an idea of what would be involved 1554 in this kind of approach, a rough sketch is given below. 1556 First, let us fix for the moment on a pull model, in which the 1557 target server, selected by a management application, pulls data from 1558 the source using NFSv4.x. The server acts as a client, albeit a 1559 specially privileged one, to copy the existing data. 1561 The first point to be made is that using NFSv4 means that we have a 1562 representation for all data that is representable within NFSv4 and 1563 that this representation is maintained automatically as minor versioning proceeds. 1564 That is, when attributes are added to a minor version of NFSv4, 1565 they are "automatically" added to the migration copy protocol, 1566 because the two are the same. 1568 The presence of COMPOUND is a further help in that implementations 1569 will be able to maintain high throughput when copying without 1570 creating a special protocol devoted to that purpose. For example, 1571 when copying a large set of small files, these files can all be 1572 read with a single COMPOUND.
This means that the benefit of 1573 creating a stream format for the entire fs is much reduced and 1574 allows existing servers (with small modifications) to simply 1575 support the kinds of access they have to support anyway. The 1576 servers acting as clients would probably use a non-standard 1577 implementation but they would share lots of infrastructure with 1578 more standard clients, so this would probably be a win on the 1579 implementation side as well as on the specification side. 1581 One other point is that if the migration protocol were in fact an 1582 NFSv4.x, NFSv4 developments such as pNFS would be available for 1583 high-performance migration, with no special effort. 1585 Clearly, there is still considerable work to do this, even if it is 1586 not of the same order as a new protocol. The working group needs 1587 to discuss this and see if there is agreement that a means of 1588 cross-server migration is worthwhile and whether this is the best 1589 way to get there. 1591 Here is a basic list of things that would have to be dealt with to 1592 effect a transfer: 1594 o Reads without changing access times. This is probably best 1595 done as a per-session attribute (it is best to assume sessions 1596 here). 1598 o Reads that ignore share reservations and mandatory locks. It 1599 may be that the existing all-ones special stateid is adequate. 1601 o A way to obtain the locking state information for the source 1602 fs: the locks (byte-range and share reservations) for that fs 1603 including associated stateids and owner opaque strings, 1604 clientid's and the other identifying client information for 1605 all clients with locks on that fs. This is all protocol- 1606 defined, rather than implementation-specific data. 1608 o A way to lock out changes on a filesystem. 
This would be 1608 similar to a read delegation on the entire filesystem, but 1609 would have a greater degree of privilege, in that the holder 1610 would be allowed to keep it as long as its lease was renewed. 1613 o A way to permanently terminate existing access to the 1614 filesystem (by everyone except the calling session) and report 1615 it MOVED to the users. 1617 Conventions for appropriate security for such operations 1618 would have to be developed to assure interoperability, but it is a 1619 question of establishing conventions rather than defining new 1620 mechanisms. 1622 Given the facilities above, you could get an initial image of a 1623 filesystem, and then rescan and update the destination until the 1624 amount of change to be propagated stabilized. At this point, 1625 changes could be locked out and a final set of updates propagated 1626 while read-only access to the filesystem continued. At that point 1627 further access would be locked out, and the locking state and any 1628 final changes to access time would be propagated. The access time 1629 scan would be manageable since the client could issue long 1630 COMPOUNDs with many PUTFH-GETATTR pairs and many such requests 1631 could be in flight at a time. 1633 If it were required that the disruption to access be smaller, some 1634 small additions to the functionality might be quite effective: 1636 o Notifications for a filesystem, perhaps building on the 1637 notifications proposed in the directory delegations document, 1638 would limit the rescanning for changes, and so would make the 1639 window in which additional changes could happen much smaller. 1640 This would greatly reduce the window in which write access 1641 would have to be locked out. 1643 o A facility for global scans for attribute changes could help 1644 reduce lockout periods. Something that gave a list of object 1645 filehandles that met a given attribute search criterion (e.g.
1646 attribute x greater than, less than, equal to, some value) 1647 could reduce rescan update times and also rescan times for 1648 access-time updates. 1650 These lists assume that the server initiating the transfer is doing 1651 its own writing to disk. Extending this to writing the new fs via 1652 NFSv4 would require further protocol support. The basic message 1653 for the working group is that the set of things to do is of 1654 moderate size and builds in large part on existing or already 1655 proposed facilities. 1657 Acknowledgements 1659 The authors wish to thank Ted Anderson and Jon Haswell for their 1660 contributions to the ideas within this document. 1662 Normative References 1664 [RFC3530] 1665 S. Shepler, et al., "NFS Version 4 Protocol", Standards Track 1666 RFC 3530, April 2003 1668 Informative References 1670 [referrals] 1671 D. Noveck, R. Burnett, "Implementation Guide for Referrals in 1672 NFSv4", Internet Draft draft-ietf-nfsv4-referrals-00.txt, Work 1673 in progress 1675 Authors' Addresses 1677 David Noveck 1678 Network Appliance, Inc. 1679 375 Totten Pond Road 1680 Waltham, MA 02451 USA 1682 Phone: +1 781 768 5347 1683 EMail: dnoveck@netapp.com 1685 Rodney C. Burnett 1686 IBM, Inc. 1687 13001 Trailwood Rd 1688 Austin, TX 78727 USA 1690 Phone: +1 512 838 8498 1691 EMail: cburnett@us.ibm.com 1693 Full Copyright Statement 1695 Copyright (C) The Internet Society (2005). This document is 1696 subject to the rights, licenses and restrictions contained in BCP 1697 78, and except as set forth therein, the authors retain all their 1698 rights.
1700 This document and the information contained herein are provided on 1701 an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE 1702 REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND 1703 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, 1704 EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT 1705 THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR 1706 ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A 1707 PARTICULAR PURPOSE. 1709 Intellectual Property 1711 The IETF takes no position regarding the validity or scope of any 1712 Intellectual Property Rights or other rights that might be claimed 1713 to pertain to the implementation or use of the technology described 1714 in this document or the extent to which any license under such 1715 rights might or might not be available; nor does it represent that 1716 it has made any independent effort to identify any such rights. 1717 Information on the procedures with respect to rights in RFC 1718 documents can be found in BCP 78 and BCP 79. 1720 Copies of IPR disclosures made to the IETF Secretariat and any 1721 assurances of licenses to be made available, or the result of an 1722 attempt made to obtain a general license or permission for the use 1723 of such proprietary rights by implementers or users of this 1724 specification can be obtained from the IETF on-line IPR repository 1725 at http://www.ietf.org/ipr. 1727 The IETF invites any interested party to bring to its attention any 1728 copyrights, patents or patent applications, or other proprietary 1729 rights that may cover technology that may be required to implement 1730 this standard. Please address the information to the IETF at ietf- 1731 ipr@ietf.org. 1733 Acknowledgement 1735 Funding for the RFC Editor function is currently provided by the 1736 Internet Society.