Network Working Group S. Shepler Internet Draft August 1998 Document: draft-shepler-nfsv4-02.txt NFS version 4 Strawman Status of this Memo This document is an Internet-Draft. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet- Drafts as reference material or to cite them other than as "work in progress." To view the entire list of current Internet-Drafts, please check the "1id-abstracts.txt" listing contained in the Internet-Drafts Shadow Directories on ftp.is.co.za (Africa), ftp.nordu.net (Northern Europe), ftp.nis.garr.it (Southern Europe), munnari.oz.au (Pacific Rim), ftp.ietf.org (US East Coast), or ftp.isi.edu (US West Coast). Abstract NFS version 4 is meant to be a further revision of the NFS protocol defined already by versions 2 and 3. It retains the essential characteristics of previous versions: stateless design for easy recovery, independent of transport protocols, operating systems and filesystems, simplicity, and good performance. This strawman is being offered as a starting point for future discussions and work on NFS version 4. The document contains ideas presented and discussed via email at nfsv4-wg@sunroof.eng.sun.com. Additional content has been added in areas with the intent of offering more suggestions for future discussion. Goals for NFS version 4 include: strong security, access and good performance via the Internet, cross-platform interoperability, and protocol extensibility. Expires: February 1999 [Page 1] Strawman NFS version 4 August 1998 Table of Contents 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 4 2. RPC and Security Flavor . . . . . . . . . . . . . . . . . . 5 2.1. Ports and Transports . . . . . . . . . . . . . . . . . . . 5 2.2. Security Flavors . . . . . . . . . . . . . . . . . . . . . 5 2.2.1. Security mechanisms for NFS version 4 . . . . . . . . . 5 2.3. Security Negotiation . . . . . . . . . . . . . . . . . . . 6 2.3.1. Security Error . . . . . . . . . . . . . . . . . . . . . 6 2.3.2. SECINFO . . . . . . . . . . . . . . . . . . . . . . . . 6 2.4. Alternate Negotiation Technique - SPNEGO . . . . . . . . . 6 3. File handles . . . . . . . . . . . . . . . . . . . . . . . . 7 3.1. Obtaining the first file handle . . . . . . . . . . . . . 7 3.2. The persistent and volatile file handle . . . . . . . . . 7 4. Basic Data Types . . . . . . . . . . . . . . . . . . . . . . 9 5. File Attributes . . . . . . . . . . . . . . . . . . . . . 11 5.1. Defining Attributes . . . . . . . . . . . . . . . . . . 12 5.2. File Attribute Bits . . . . . . . . . . . . . . . . . . 12 6. Defined Error Numbers . . . . . . . . . . . . . . . . . . 20 7. Compound Requests . . . . . . . . . . . . . . . . . . . . 24 8. NFS Version 4 Requests . . . . . . . . . . . . . . . . . . 25 8.1. Evaluation of a Compound Request . . . . . . . . . . . . 25 9. NFS Version 4 Procedures . . . . . . . . . . . . . . . . . 26 9.1. Procedure 0: NULL - No operation . . . . . . . . . . . . 27 9.2. Procedure 1: ACCESS - Check Access Permission . . . . . 28 9.3. Procedure 2: COMMIT - Commit cached data . . . . . . . . 31 9.4. Procedure 3: CREATE - Create a filesystem object . . . . 34 9.5. Procedure 4: GETATTR - Get attributes . . . . . . . . . 38 9.6. 
Procedure 5: GETFH - Get current filehandle . . . . . . 39 9.7. Procedure 6: LINK - Create link to an object . . . . . . 40 9.8. Procedure 7: LOCKR - Create a read lock . . . . . . . . 42 9.9. Procedure 8: LOCKW - Create write lock . . . . . . . . . 44 9.10. Procedure 9: LOCKT - test for lock . . . . . . . . . . 46 9.11. Procedure 10: LOCKX - validate and extend lock . . . . 47 9.12. Procedure 11: LOCKU - Unlock file . . . . . . . . . . . 49 9.13. Procedure 12: LOOKUP - Lookup filename . . . . . . . . 50 9.14. Procedure 13: LOOKUPP - Lookup parent directory . . . . 52 9.15. Procedure 14: NVERIFY - Verify attributes different . . 53 9.16. Procedure 15: RESTOREFH - Restore saved filehandle . . 54 9.17. Procedure 16: SAVEFH - Save current filehandle . . . . 55 9.18. Procedure 17: PUTFH - Set current filehandle . . . . . 56 9.19. Procedure 18: PUTROOTFH - Set root filehandle . . . . . 57 9.20. Procedure 19: READ - Read from file . . . . . . . . . . 58 9.21. Procedure 20: READDIR - Read directory . . . . . . . . 60 9.22. Procedure 21: READLINK - Read symbolic link . . . . . . 63 9.23. Procedure 22: REMOVE - Remove filesystem object . . . . 65 9.24. Procedure 23: RENAME - Rename directory entry . . . . . 67 9.25. Procedure 24: SETATTR - Set attributes . . . . . . . . 69 Expires: February 1999 [Page 2] Strawman NFS version 4 August 1998 9.26. Procedure 25: VERIFY - Verify attributes same . . . . . 71 9.27. Procedure 26: WRITE - Write to file . . . . . . . . . . 72 9.28. Procedure 27: SECINFO - Obtain Available Security . . . 76 10. Locking notes . . . . . . . . . . . . . . . . . . . . . . 78 10.1. Short and long leases . . . . . . . . . . . . . . . . . 78 10.2. Clocks and leases . . . . . . . . . . . . . . . . . . . 78 10.3. Locks and lease times . . . . . . . . . . . . . . . . . 79 10.4. Lease scalability . . . . . . . . . . . . . . . . . . . 79 10.5. Rejecting write locks and denial of service . . . . . . 79 10.6. Locking of directories and other meta-files . . . . . . 79 10.7. Proxy servers and leases . . . . . . . . . . . . . . . 79 10.8. Archive updates and lease time adjustment . . . . . . . 79 10.9. Locking and the new latency . . . . . . . . . . . . . . 80 11. NFS Version 4 RPC definition file . . . . . . . . . . . . 81 12. Bibliography . . . . . . . . . . . . . . . . . . . . . . 99 13. Author's Address . . . . . . . . . . . . . . . . . . . . 102 Expires: February 1999 [Page 3] Strawman NFS version 4 August 1998 1. Introduction NFS version 4 is a further revision of the NFS protocol defined already by versions 2 [RFC1094] and 3 [RFC1813]. It retains the essential characteristics of previous versions: stateless design for easy recovery, independent of transport protocols, operating systems and filesystems, simplicity, and good performance. The NFS version 4 revision has the following goals: o Improved access and good performance on the Internet. The protocol is designed to transit firewalls easily, perform well where latency is high and bandwidth is low, and scale to very large numbers of clients per server. o Strong security with negotiation built into the protocol. The protocol builds on the work of the ONCRPC working group in supporting the RPCSEC_GSS protocol. Additionally NFS version 4 provides a mechanism to allow clients and servers to negotiate security and require clients and servers to support a minimal set of security schemes. o Good cross-platform interoperability. 
The protocol features a filesystem model that provides a useful, common set of features that does not unduly favor one filesystem or operating system over another. o Designed for protocol extensions. The protocol is designed to accept standard extensions that do not compromise backward compatibility. Expires: February 1999 [Page 4] Strawman NFS version 4 August 1998 2. RPC and Security Flavor The NFS version 4 protocol will use the Remote Procedure Call (RPC) version 2 and corresponding eXternal Data Representation (XDR) as defined in [RFC1831] and [RFC1832]. The RPCSEC_GSS security flavor as defined in [RFC2203] will be used as the mechanism to deliver stronger security to NFS version 4. 2.1. Ports and Transports Historically, NFS version 2 and version 3 servers have resided on UDP/TCP port 2049. Port 2049 is a IANA registered port number for NFS and therefore will continue to be used for NFS version 4. The NFS server should use port 2049 as a means to ease the use of NFS through firewalls. This means that for NFS version 4 services the client will not need to use the RPC binding protocols as described in [RFC1833]. The NFS server, at a minimum, must offer its RPC service via the TCP transport. The use of UDP for RPC service offering should also be present if applicable. The NFS client should have a preference for TCP usage but should supply a mechanism to override TCP in favor of UDP as the RPC transport. 2.2. Security Flavors Traditional RPC implementations have included AUTH_NONE, AUTH_SYS, AUTH_DH, and AUTH_KRB4 as security flavors. With [RFC2203] an additional security flavor of RPCSEC_GSS has been introduced which uses the functionality of GSS_API [RFC2078]. This allows for the use of varying security mechanisms by the RPC layer without the additional implementation overhead of adding RPC security flavors. 2.2.1. Security mechanisms for NFS version 4 As a goal of the NFS version 4 work, adding stronger security to the protocol definition is required. The use of RPCSEC_GSS will require selection of: mechanism, quality of protection, and service (authentication, integrity, privacy). The remainder of this document will refer to these three parameters of the RPCSEC_GSS security as the security triple. NOTE: Kerberos-V5 has been suggested as one of the security mechanisms. Another mechanism should be chosen and should be a public key based system so as to complement the Kerberos-V5 selection. Expires: February 1999 [Page 5] Strawman NFS version 4 August 1998 2.3. Security Negotiation With the NFS version 4 server potentially offering multiple security mechanisms, the client will need a way to determine or negotiate which mechanism is to be used for its communication with the server. The NFS server may have multiple points within its file system name space that are available for use by NFS clients. In turn the NFS server may be configured such that each of these entry points may have different or multiple security mechanisms in use. The security negotiation between client and server must be done with a secure channel to eliminate the possibility of a third party intercepting the negotiation sequence and forcing the client and server to choose a lower level of security than required/desired. 2.3.1. Security Error Based on the assumption that each NFS version 4 client and server must support a minimum set of security (i.e. Kerberos-V5 under RPCSEC_GSS, ), the NFS client will start its communication with the server with one of the minimal security triples. 
During communication with the server, the client may receive an NFS error of NFS4ERR_WRONGSEC. This error allows the server to notify the client that the security triple currently being used is not appropriate for access to the server's file system resources. The client is then responsible for determining what security triples are available at the server and choosing one which is appropriate for the client.

2.3.2. SECINFO

The new procedure SECINFO (see SECINFO procedure definition) will allow the client to determine, on a per filehandle basis, what security triple is to be used for server access. In general, the client will not have to use the SECINFO procedure except during initial communication with the server or when the client crosses policy boundaries at the server. It could happen that the server's policies change during the client's interaction, therefore forcing the client to negotiate a new security triple.

2.4. Alternate Negotiation Technique - SPNEGO

It has also been suggested that the SPNEGO protocol defined in [SPNEGO] would be available for use with RPCSEC_GSS. However, this seems to imply that the NFS server would need to offer all of its resources under the same security mechanism. This needs to be evaluated further as an alternative.

3. File handles

The file handle in the NFS protocol is an opaque identifier for a file system object. The server is responsible for translating the file handle to its internal representation of the file system object. The file handle uniquely identifies a file system object at the NFS server. The client should be able to depend on the fact that a file handle will not be reused once a file system object has been destroyed. If the file handle is reused, the time elapsed before reuse will be very significant. Note that each NFS procedure is defined in terms of its file handle(s) except for the NULL procedure.

3.1. Obtaining the first file handle

Since each of the meaningful operations of the NFS protocol requires a file handle, the client must have a mechanism to obtain the first file handle. With NFS version 2 [RFC1094] and NFS version 3 [RFC1813], there exists an ancillary protocol to obtain the first file handle. The MOUNT protocol, RPC program number 100005, provides the mechanism of translating a string-based file system path name to a file handle which can then be used by the NFS protocols.

The MOUNT protocol as currently implemented has deficiencies in the area of security and use via firewalls. This is one reason that the use of the public file handle was introduced [add references to RFCs for WebNFS]. The public file handle is a special case file handle that is used in combination with a path name to avoid using the MOUNT protocol for obtaining the first file handle. With the introduction and use of the public file handle in the previous versions of NFS, it has been shown that the MOUNT protocol is unnecessary for viable interaction between the client and server with the use of file handles.

3.2. The persistent and volatile file handle

For the first time in NFS version 4, the file handle constructed by the server can be volatile. In the previous versions of NFS, the server was responsible for ensuring the persistence of the file handle. This meant that as long as a file system object remained in existence at the server the file handle for that object had to be the same each time the client asked for it.
This persistent quality eased the implementation at the client in the event of server restart or failure and recovery. For some servers, fulfilling the persistent requirement has been straight forward; for others it has been difficult and affected at best performance and at worst correctness. The existence of the volatile file handle requires the client to implement a method of recovering from the expiration of a file Expires: February 1999 [Page 7] Strawman NFS version 4 August 1998 handle. Most commonly the client will need to store the component names associated with the file system object in question. With these names, the client will be able to recover by finding a file handle in the name space that is still available or by starting at the root of the server's file system name space. The use of a volatile file handle provides these advantages: o Allows or eases the server implementation requirements o Server can provide extended services more easily with the use of volatile file handles (HSM software, file system reorganization) o Others??? NOTE: Need to describe a method of identifying a file handle as persistent or volatile (In the file handle itself?). Also need a discussion of when and where a each type of file handle would be used. Also need to extend the list of examples of what things volatile file handles enable (or remove the list altogether). Note: A question has arisen about the server's ability to return a correct error code (NFS4ERR_STALE vs. NFS4ERR_EXPIRED). One implementation that has been suggested is the following. A volatile file handle, while opaque to the client could contain: volatile bit = 1 | server boot time | slot | generation number slot is an index in the server volatile file handle table. generation number is the generation number for the table entry/slot. If the server boot time is less than the current server boot time, return NFS4ERR_EXPIRED. If slot is out of range, return NFS4ERR_EXPIRED. If the generation number does not match, return NFS4ERR_EXPIRED. When the server reboots, the table is gone (it is volatile). If volatile bit is 0, then it is a persistent file handle with a different structure following it. Expires: February 1999 [Page 8] Strawman NFS version 4 August 1998 4. Basic Data Types Arguments and results from operations will be described in terms of basic XDR types defined in [RFC1832]. The following data types will be defined in terms of basic XDR types: filehandle: opaque <128> An NFS version 4 filehandle. A filehandle with zero length is recognized as a "public" filehandle. utf8string: opaque <> A counted array of octets that contains a UTF-8 string. bitmap: uint32 <> A counted array of 32 bit integers used to contain bit values. The position of the integer in the array that contains bit n can be computed from the expression (n / 32) and its bit within that integer is (n mod 32). 0 1 +-----------+-----------+-----------+-- | count | 31 .. 0 | 63 .. 32 | +-----------+-----------+-----------+-- createverf: opaque<8> Verify used for exclusive create semantics nfstime4 struct nfstime4 { int64_t seconds; uint32_t nseconds; } The nfstime4 structure gives the number of seconds and nanoseconds since midnight or 0 hour January 1, 1970 Coordinated Universal Time (UTC). Values greater than zero for the seconds field denote dates after the 0 hour January 1, 1970. Values less than zero for the seconds field denote dates before the 0 hour January 1, 1970. 
In both cases, the nseconds field is to be added to the seconds field for the final time representation. For example, if the time to be represented is one-half second before 0 hour January 1, 1970, the seconds field would have a value of negative one (-1) and the nseconds field would have a value of one-half second (500000000). Values greater than 999,999,999 for nseconds are considered invalid.

This data type is used to pass time and date information. A server converts to and from local time when processing time values, preserving as much accuracy as possible. If the precision of timestamps stored for a file system object is less than defined, loss of precision can occur. An adjunct time maintenance protocol is recommended to reduce client and server time skew.

specdata4

   struct specdata4 {
           uint32_t        specdata1;
           uint32_t        specdata2;
   }

This data type represents additional information for the device file types NF4CHR and NF4BLK.

Note: This is used for the rdev attribute. Is this the correct representation or should this be considered an extended/named attribute for a file? Is there some other solution?

5. File Attributes

Previous versions of the NFS protocol supported only the set of POSIX file attributes.

   Posix         V2 Fattr    V3 Fattr3
   -----         --------    ---------
   -             type        type
   st_mode       mode        mode
   st_ino        fileid      fileid
   st_dev        fsid        fsid
   st_rdev       rdev        rdev
   st_nlink      nlink       nlink
   st_uid        uid         uid
   st_gid        gid         gid
   st_size       size        size
   -             -           used
   st_atime      atime       atime
   st_mtime      mtime       mtime
   st_ctime      ctime       ctime
   st_blksize    blocks      -
   st_blocks     blocksize   -

This fixed set of attributes has been limiting:

o There is no way to add new attributes without revising the protocol. This penalizes file systems and/or operating systems that support attributes that do not map into the POSIX set.

o Not all file systems or operating systems support the full range of POSIX attributes. The server is required to "invent" approximate values for attributes that it does not support. The client does not know that the server doesn't support these values.

o Attributes cannot be obtained individually. If the client needs to obtain only one attribute it must request them all. Some of those attributes may be computationally expensive for the server to return.

o The set of supported attributes may vary depending on the type of file system object. Additionally, previous versions of the protocol required multiple attribute spaces for files (GETATTR) and file systems (FSINFO, FSSTAT, PATHCONF) which heavily favored POSIX-based file systems.

To overcome these limitations NFS version 4 supports an attribute model with the following features:

o Extensibility. New attributes can be added in incremental revisions of the protocol.

o For each file system object the client can determine which attributes are supported.

o The client can select the attributes it needs.

5.1. Defining Attributes

Each attribute is assigned a unique integer which corresponds to a position in a bitmap. When requesting or setting attributes the client sets the appropriate bits in the bitmap to identify the attributes. Similarly, when returning attributes the server returns a bitmap that identifies the attributes returned. The sequence of attributes in a request or reply must follow the order of the attribute bit assignments, lowest attribute number first.

5.2. File Attribute Bits

Name: type

Data type: uint32

Description: Type of file.
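As an informal illustration of the bit numbering described in sections 4 and 5.1, the following C sketch shows one way a client might set and test attribute bits using the (n / 32) and (n mod 32) rule. The helper names are assumptions made for the example only; they are not part of the protocol.

   /* Sketch only: locate positions in the attribute bitmap, whose
    * wire form is a counted array of uint32 words (section 4).
    */
   #include <stdint.h>

   static void attr_bitmap_set(uint32_t *bm, unsigned n)
   {
           bm[n / 32] |= (uint32_t)1 << (n % 32);  /* word n/32, bit n%32 */
   }

   static int attr_bitmap_isset(const uint32_t *bm, unsigned count,
                                unsigned n)
   {
           if (n / 32 >= count)
                   return 0;       /* bit lies beyond the counted array */
           return (bm[n / 32] >> (n % 32)) & 1;
   }

A client would zero an array of uint32 words large enough for the highest attribute bit it needs, call attr_bitmap_set() for each attribute number of interest, and then encode the words and their count into the request.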
Note: Some of these are now handled by accessbits. Need to represent Unix perm bits as an ACL Name: mode Data type: uint32 Description: Protection mode bits The mode bits are defined as follows: Expires: February 1999 [Page 12] Strawman NFS version 4 August 1998 0x00800 Set user ID on execution. 0x00400 Set group ID on execution. 0x00200 Save swapped text (not defined in POSIX). 0x00100 Read permission for owner. 0x00080 Write permission for owner. 0x00040 Execute permission for owner on a file. Or lookup (search) permission for owner in directory. 0x00020 Read permission for group. 0x00010 Write permission for group. 0x00008 Execute permission for group on a file. Or lookup (search) permission for group in directory. 0x00004 Read permission for others. 0x00002 Write permission for others. 0x00001 Execute permission for others on a file. Or lookup (search) permission for others in directory. Name: accessbits Data type: uint32 Description: 0x0001 READ. Read data from file or read a directory. 0x0002 LOOKUP. Look up a name in a directory (no meaning for non-directory objects). 0x0004 MODIFY. Rewrite existing file data or modify existing directory entries. 0x0008 EXTEND. Write new data or add directory entries. 0x0010 DELETE. Delete an existing directory entry. 0x0020 EXECUTE. Expires: February 1999 [Page 13] Strawman NFS version 4 August 1998 Execute file (no meaning for a directory). Name: nlink Data type: uint32 Description: Number of hard links to the file - that is, the number of different names for the same file. If a modification is made to data within a file and the file has a nlink value greater than 1, then the modifications will appear under each of the names for the file. Name: uid Data type: utf8string Description: Identifier of the owner of the file. Name: gid Data type: utf8string Description: Identifier of the group of the file. NOTE: The string representation for the user and group identifiers of a file are provided to include support for user identifiers beyond the scope of the traditional Unix uid/gid name space. The contents of the user and group identifier should be defined or have strong recommendations. One suggestion for user identifier might be user@domain. To translate a traditional Unix uid the representation may be something like 123456@uid. Name: size Data type: uint64 Description: Size of the file in bytes. Expires: February 1999 [Page 14] Strawman NFS version 4 August 1998 Name: used Data type: uint64 Description: Number of bytes of disk space that the file actually uses (which can be smaller because the file may have holes or it may be larger due to fragmentation). Name: rdev Data type: specdata4 Description: Describes the device file if the file type is NF4CHR or NF4BLK. For all other file types, this attribute is undefined. If this attribute is returned from the server for file types other than NF4CHR and NF4BLK, the client should consider the values to be zero. Name: fsid Data type: uint64 Description: The file system identifier for the file system. This identifier is expected to uniquely identify the file system at the server. NOTE: The unique quality of the fsid will indicate to the client that certain operations will fail if the source and target of the operation are located on different fsids. A RENAME is a good example of this. If the source and destination directories have different fsid values at the server then the RENAME operation will fail. This type of failure mode needs to be determined and documented for all procedures. 
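As a non-normative illustration of the point made in the NOTE above, the following sketch shows a client-side check that a rename between two directories stays within a single fsid; the function name and parameters are hypothetical, not protocol elements.

   /* Sketch only: a RENAME whose source and destination directories
    * report different fsid attributes can be expected to fail at the
    * server, so a client might fall back to a copy-and-remove.
    */
   #include <stdint.h>
   #include <stdbool.h>

   static bool rename_stays_in_fsid(uint64_t src_dir_fsid,
                                    uint64_t dst_dir_fsid)
   {
           return src_dir_fsid == dst_dir_fsid;
   }

The same comparison of fsid values is also how a client can detect that a LOOKUP has crossed a mountpoint on the server (see the LOOKUP procedure).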
Name: fileid

Data type: uint32

Description: A number which uniquely identifies the file within the file system. On UNIX this would be the inode number.

Note: Are the fsid and fileid data types large enough for unique identifiers? Are there environments in which something more is needed?

Name: atime

Data type: nfstime4

Description: The time when the file data was last accessed.

Name: mtime

Data type: nfstime4

Description: The time when the file data was last modified.

Note: In the case that a file is updated twice within the granularity of the server's mtime, what is the server supposed to do? Is it supposed to increase the mtime nseconds field to signify that a change has occurred? In the case that mtime is not kept for certain file system objects, what is the server supposed to do when the object is updated? Is mtime sufficient or should there be another opaque attribute that can be used by the server to fulfill the client's need to know if the file system object has been updated?

Name: ctime

Data type: nfstime4

Description: The time when the attributes of the file were last changed. Writing to the file changes the ctime in addition to the mtime.

Name: rtmax

Data type: uint32

Description: The maximum size in bytes of a READ request supported by the server. Any READ with a count greater than rtmax will result in a short read of rtmax bytes or less.

Name: wtmax

Data type: uint32

Description: The maximum size of a WRITE request supported by the server. In general, the client is limited by wtmax since there is no guarantee that a server can handle a larger write. Any WRITE with a count greater than wtmax will result in a short write of at most wtmax bytes.

Name: maxfilesize

Data type: uint64

Description: The maximum size of a file on the file system.

Name: time_delta

Data type: nfstime4

Description: The server time granularity. When setting a file time using SETATTR, the server guarantees only to preserve times to this accuracy. If this is {0, 1}, the server can support nanosecond times, {0, 1000000} denotes millisecond precision, and {1, 0} indicates that times are accurate only to the nearest second.

Note: Should there be more granularity definitions or a general scheme devised for this? Is this attribute necessary at all? If there are mechanisms to ensure that modification times are recorded correctly or at least recorded in such a way as to signify that a modification has occurred, is this attribute needed?

Name: linkmax

Data type: uint32

Description: The maximum number of hard links to an object.

Name: name_max

Data type: uint32

Description: The maximum length of a component of a filename.

Name: change

Data type: uint64

Description: A value created by the server that the client can use to determine if file data, directory contents, or attributes have been modified. The server can simply return the file mtime in this field, though if a more precise value exists it can be substituted; for instance, a checksum or sequence number.

Name: properties

Data type: uint32

Description: A bit mask of file system properties. The following values are defined:

FSF_LINK If this bit is 1 (TRUE), the file system supports hard links.

FSF_SYMLINK If this bit is 1 (TRUE), the file system supports symbolic links.
FSF_HOMOGENEOUS If this bit is 1 (TRUE), the information in the properties attributes is identical for every file and directory in the file system. If it is 0 (FALSE), the client should retrieve properties information for each file and directory as required. Expires: February 1999 [Page 18] Strawman NFS version 4 August 1998 FSF_CANSETTIME If this bit is 1 (TRUE), the server will set the times for a file via SETATTR if requested (to the accuracy indicated by time_delta). If it is 0 (FALSE), the server cannot set times as requested. FSF_NOTRUNC If this bit is 1 (TRUE), the server will reject any request that includes a name longer than name_max with the error, NFS4ERR_NAMETOOLONG. If FALSE, any length name over name_max bytes will be silently truncated to name_max bytes. FSF_CHOWN_RESTRICTED If this bit is 1 (TRUE), the server will reject any request to change either the owner or the group associated with a file if the caller is not the privileged user. (UID 0) FSF_CASE_INSENSITIVE If this bit is 1 (TRUE), the server file system does not distinguish case when interpreting filenames. FSF_CASE_PRESERVING If this bit is 1 (TRUE), the server will preserve the case of a name during the creation of a file system object. (i.e. CREATE, MKDIR, MKNOD, SYMLINK, RENAME or LINK operation) For FSF_CHOWN_RESTRICTED, what should be done with the privileged user definition in face of a non-numeric uid/gid. Expires: February 1999 [Page 19] Strawman NFS version 4 August 1998 6. Defined Error Numbers NFS error numbers are assigned to failed operations within a compound request. A compound request contains a number of NFS operations that have their results encoded in sequence in a compound reply. The results of successful operations will consist of an NFS4_OK status followed by the encoded results of the operation. If an NFS operation fails, an error status will be entered in the reply and the compound request will be terminated. A description of each defined error follows: NFS4_OK Indicates the operation completed successfully. NFS4ERR_PERM Not owner. The operation was not allowed because the caller is either not a privileged user (root) or not the owner of the target of the operation. NFS4ERR_NOENT No such file or directory. The file or directory name specified does not exist. NFS4ERR_IO I/O error. A hard error (for example, a disk error) occurred while processing the requested operation. NFS4ERR_NXIO I/O error. No such device or address. NFS4ERR_ACCES Permission denied. The caller does not have the correct permission to perform the requested operation. Contrast this with NFS4ERR_PERM, which restricts itself to owner or privileged user permission failures. NFS4ERR_DENIED An attempt to lock a file is denied. Since this may be a temporary condition, the client is encouraged to retry the lock request (with exponential backoff of timeout) until the lock is accepted. NFS4ERR_EXIST File exists. The file specified already exists. Expires: February 1999 [Page 20] Strawman NFS version 4 August 1998 NFS4ERR_XDEV Attempt to do a cross-device hard link. NFS4ERR_NODEV No such device. NFS4ERR_NOTDIR Not a directory. The caller specified a non- directory in a directory operation. NFS4ERR_ISDIR Is a directory. The caller specified a directory in a non-directory operation. NFS4ERR_INVAL Invalid argument or unsupported argument for an operation. Two examples are attempting a READLINK on an object other than a symbolic link or attempting to SETATTR a time field on a server that does not support this operation. 
NFS4ERR_FBIG File too large. The operation would have caused a file to grow beyond the server's limit. NFS4ERR_NOSPC No space left on device. The operation would have caused the server's file system to exceed its limit. NFS4ERR_ROFS Read-only file system. A modifying operation was attempted on a read-only file system. NFS4ERR_MLINK Too many hard links. NFS4ERR_NAMETOOLONG The filename in an operation was too long. NFS4ERR_NOTEMPTY An attempt was made to remove a directory that was not empty. NFS4ERR_DQUOT Resource (quota) hard limit exceeded. The user's resource limit on the server has been exceeded. Expires: February 1999 [Page 21] Strawman NFS version 4 August 1998 NFS4ERR_LOCKED A read or write operation was attempted on a locked file. NFS4ERR_STALE Invalid file handle. The file handle given in the arguments was invalid. The file referred to by that file handle no longer exists or access to it has been revoked. NFS4ERR_BADHANDLE Illegal NFS file handle. The file handle failed internal consistency checks. NFS4ERR_NOT_SYNC Update synchronization mismatch was detected during a SETATTR operation. NFS4ERR_BAD_COOKIE READDIR cookie is stale. NFS4ERR_NOTSUPP Operation is not supported. NFS4ERR_TOOSMALL Buffer or request is too small. NFS4ERR_SAME Returned if an NVERIFY operation shows that no attributes have changed. NFS4ERR_SERVERFAULT An error occurred on the server which does not map to any of the legal NFS version 4 protocol error values. The client should translate this into an appropriate error. UNIX clients may choose to translate this to EIO. NFS4ERR_BADTYPE An attempt was made to create an object of a type not supported by the server. NFS4ERR_JUKEBOX The server initiated the request, but was not able to complete it in a timely fashion. The client should wait and then try the request with a new RPC transaction ID. For example, this error should be returned from a server that supports hierarchical storage and receives a Expires: February 1999 [Page 22] Strawman NFS version 4 August 1998 request to process a file that has been migrated. In this case, the server should start the immigration process and respond to client with this error. NFS4ERR_FHEXPIRED The file handle provided is volatile and has expired at the server. The client should attempt to recover the new file handle by traversing the server's file system name space. The file handle may have expired because the server has restarted, the file system object has been removed, or the file handle has been flushed from the server's internal mappings. NOTE: This error definition will need to be crisp and match the section describing the volatile file handles. NFS4ERR_WRONGSEC THe security mechanism being used by the client for the procedure does not match the server's security policy. The client should change the security mechanism being used and retry the operation. Expires: February 1999 [Page 23] Strawman NFS version 4 August 1998 7. Compound Requests NFS version 4 allows a client to combine multiple NFS operations into a single request. Compound requests provide: o Good performance on high latency networks If a client can combine multiple, dependent operations into a single request then it can avoid the cumulative latency in many request/response round-trips across the network. This is particularly important on the Internet or through geosynchronous satellite connections. o Protocol simplification Clients can build NFS requests of arbitrary complexity from more primitive operations. 
These requests can be tailored to the unique needs of each client. A compound request looks like this: +-----------+-----------+-----------+-- | op + args | op + args | op + args | +-----------+-----------+-----------+-- and the reply looks like this: +----------------+----------------+----------------+-- | code + results | code + results | code + results | +----------------+----------------+----------------+-- Where "code" is an indication of the success or failure of the operation including the opcode itself. Expires: February 1999 [Page 24] Strawman NFS version 4 August 1998 8. NFS Version 4 Requests Nearly all NFS version 4 operations are defined as compound operations - not as RPC procedures. There is a single RPC procedure for all compound requests. NOTE: Let's imagine procedure 1 is defined as a compound request. Procedure 2 might be a proxied compound request, i.e. a compound request with a header that identifies the target server. 8.1. Evaluation of a Compound Request NOTE: A useful initial prefix on a compound request sequence would be a string that summarizes the content of the compound request for the benefit of packet sniffers like snoop and engineers debugging implementations. The server evaluates the operations in sequence. Each operation consists of a 32 bit operation code, followed by a sequence of arguments of length determined by the type of operation. The results of each operation are encoded in sequence into a reply buffer. The results of each operation are preceded by the opcode and a status code (normally zero). If an operation fails a non-zero status code will be encoded, evaluation of the compound request will halt, and the reply will be returned. The client is responsible for recovering from any partially completed compound request. Each operation assumes a "current" filehandle that is available as part of the execution context of the compound request. Operations may set, change, or return this filehandle. Expires: February 1999 [Page 25] Strawman NFS version 4 August 1998 9. NFS Version 4 Procedures Expires: February 1999 [Page 26] Strawman NFS version 4 August 1998 9.1. Procedure 0: NULL - No operation SYNOPSIS (cfh) -> (cfh) ARGS (none) RESULTS (none) DESCRIPTION The server does no work other than to return a NFS_OK result in the reply. ERRORS (none) Expires: February 1999 [Page 27] Strawman NFS version 4 August 1998 9.2. Procedure 1: ACCESS - Check Access Permission SYNOPSIS (cfh), permbits -> permbits ARGS permbits: uint32 RESULTS permbits: uint32 DESCRIPTION ACCESS determines the access rights that a user, as identified by the credentials in the request, has with respect to a file system object. The client encodes the set of permissions that are to be checked in a bit mask. The server checks the permissions encoded in the bit mask. A status of NFS4_OK is returned along with a bit mask encoded with the permissions that the client is allowed. The results of this procedure are necessarily advisory in nature. That is, a return status of NFS4_OK and the appropriate bit set in the bit mask does not imply that such access will be allowed to the file system object in the future, as access rights can be revoked by the server at any time. The following access permissions may be requested: ACCESS_READ: bit 0 Read data from file or read a directory. ACCESS_LOOKUP: bit 1 Look up a name in a directory (no meaning for non-directory objects). ACCESS_MODIFY: bit 2 Rewrite existing file data or modify existing directory entries. 
ACCESS_EXTEND: bit 3 Write new data or add directory entries. ACCESS_DELETE: bit 4 Delete an existing directory entry. ACCESS_EXECUTE: bit 5 Execute file (no meaning for a directory). Expires: February 1999 [Page 28] Strawman NFS version 4 August 1998 The server must return an error if the any access permission cannot be determined. IMPLEMENTATION In general, it is not sufficient for the client to attempt to deduce access permissions by inspecting the uid, gid, and mode fields in the file attributes, since the server may perform uid or gid mapping or enforce additional access control restrictions. It is also possible that the NFS version 4 protocol server may not be in the same ID space as the NFS version 4 protocol client. In these cases (and perhaps others), the NFS version 4 protocol client can not reliably perform an access check with only current file attributes. In the NFS version 2 protocol, the only reliable way to determine whether an operation was allowed was to try it and see if it succeeded or failed. Using the ACCESS procedure in the NFS version 4 protocol, the client can ask the server to indicate whether or not one or more classes of operations are permitted. The ACCESS operation is provided to allow clients to check before doing a series of operations. This is useful in operating systems (such as UNIX) where permission checking is done only when a file or directory is opened. This procedure is also invoked by NFS client access procedure (called possibly through access(2)). The intent is to make the behavior of opening a remote file more consistent with the behavior of opening a local file. The information returned by the server in response to an ACCESS call is not permanent. It was correct at the exact time that the server performed the checks, but not necessarily afterwards. The server can revoke access permission at any time. The NFS version 4 protocol client should use the effective credentials of the user to build the authentication information in the ACCESS request used to determine access rights. It is the effective user and group credentials that are used in subsequent read and write operations. Many implementations do not directly support the ACCESS_DELETE permission. Operating systems like UNIX will ignore the ACCESS_DELETE bit if set on an access request on a non-directory object. In these systems, delete permission on a file is determined by the access permissions on the directory in which the file resides, instead of being determined by the permissions of the file itself. Thus, the bit mask returned for such a request will have the ACCESS_DELETE bit set to 0, indicating that the client does not have this permission. Expires: February 1999 [Page 29] Strawman NFS version 4 August 1998 ERRORS NFS4ERR_IO NFS4ERR_SERVERFAULT SEE GETATTR. Expires: February 1999 [Page 30] Strawman NFS version 4 August 1998 9.3. Procedure 2: COMMIT - Commit cached data SYNOPSIS (cfh), offset, count -> verifier Procedure COMMIT forces or flushes data to stable storage that was previously written with a WRITE operation with the stable field set to UNSTABLE. ARGS offset: uint64 The position within the file at which the flush is to begin. An offset of 0 means to flush data starting at the beginning of the file. count: uint32 The number of bytes of data to flush. If count is 0, a flush from offset to the end of file is done. RESULTS verifier: uint32 This is a cookie that the client can use to determine whether the server has rebooted between a call to WRITE and a subsequent call to COMMIT. 
This cookie must be consistent during a single boot session and must be unique between instances of the NFS version 4 protocol server where uncommitted data may be lost. IMPLEMENTATION Procedure COMMIT is similar in operation and semantics to the POSIX fsync(2) system call that synchronizes a file's state with the disk, that is it flushes the file's data and metadata to disk. COMMIT performs the same operation for a client, flushing any unsynchronized data and metadata on the server to the server's disk for the specified file. Like fsync(2), it may be that there is some modified data or no modified data to synchronize. The data may have been synchronized by the server's normal periodic buffer synchronization activity. COMMIT will always return NFS4_OK, unless there has been an unexpected error. COMMIT differs from fsync(2) in that it is possible for the client Expires: February 1999 [Page 31] Strawman NFS version 4 August 1998 to flush a range of the file (most likely triggered by a buffer- reclamation scheme on the client before file has been completely written). The server implementation of COMMIT is reasonably simple. If the server receives a full file COMMIT request, that is starting at offset 0 and count 0, it should do the equivalent of fsync()'ing the file. Otherwise, it should arrange to have the cached data in the range specified by offset and count to be flushed to stable storage. In both cases, any metadata associated with the file must be flushed to stable storage before returning. It is not an error for there to be nothing to flush on the server. This means that the data and metadata that needed to be flushed have already been flushed or lost during the last server failure. The client implementation of COMMIT is a little more complex. There are two reasons for wanting to commit a client buffer to stable storage. The first is that the client wants to reuse a buffer. In this case, the offset and count of the buffer are sent to the server in the COMMIT request. The server then flushes any cached data based on the offset and count, and flushes any metadata associated with the file. It then returns the status of the flush and the verf verifier. The other reason for the client to generate a COMMIT is for a full file flush, such as may be done at close. In this case, the client would gather all of the buffers for this file that contain uncommitted data, do the COMMIT operation with an offset of 0 and count of 0, and then free all of those buffers. Any other dirty buffers would be sent to the server in the normal fashion. This implementation will require some modifications to the buffer cache on the client. After a buffer is written with stable UNSTABLE, it must be considered as dirty by the client system until it is either flushed via a COMMIT operation or written via a WRITE operation with stable set to FILE_SYNC or DATA_SYNC. This is done to prevent the buffer from being freed and reused before the data can be flushed to stable storage on the server. When a response comes back from either a WRITE or a COMMIT operation that contains an unexpected verf, the client will need to retransmit all of the buffers containing uncommitted cached data to the server. How this is to be done is up to the implementor. If there is only one buffer of interest, then it should probably be sent back over in a WRITE request with the appropriate stable flag. 
If there more than one, it might be worthwhile retransmitting all of the buffers in WRITE requests with stable set to UNSTABLE and then retransmitting the COMMIT operation to flush all of the data on the server to stable Expires: February 1999 [Page 32] Strawman NFS version 4 August 1998 storage. The timing of these retransmissions is left to the implementor. The above description applies to page-cache-based systems as well as buffer-cache-based systems. In those systems, the virtual memory system will need to be modified instead of the buffer cache. ERRORS NFS4ERR_IO NFS4ERR_LOCKED NFS4ERR_SERVERFAULT SEE WRITE. Expires: February 1999 [Page 33] Strawman NFS version 4 August 1998 9.4. Procedure 3: CREATE - Create a filesystem object SYNOPSIS (cfh), name, type, how -> (cfh) ARGS name: utf8string objtype: filetype how: union UNCHECKED: GUARDED: attrbits: bitmap attrvals EXCLUSIVE: verifier: createverf RESULTS (cfh): filehandle DESCRIPTION Procedure CREATE creates an object in a directory with a given name. The objtype determines the type of object to be created: directory, regular file, etc. The how union may have a value of UNCHECKED, GUARDED, and EXCLUSIVE. UNCHECKED means that the object should be created without checking for the existence of a duplicate object in the same directory. In this case, attrbits and attrvals describe the initial attributes for the file. GUARDED specifies that the server should check for the presence of a duplicate object before performing the create and should fail the request with NFS4ERR_EXIST if a duplicate object exists. If the object does not exist, the request is performed as described for UNCHECKED. EXCLUSIVE specifies that the server is to follow exclusive creation semantics, using the verifier to ensure exclusive creation of the target. No attributes may be provided in this case, since the server may use the target object metadata to store the verifier. Expires: February 1999 [Page 34] Strawman NFS version 4 August 1998 The current filehandle is replaced by that of the new object. IMPLEMENTATION The CREATE procedure carries support for EXCLUSIVE create forward from NFS version 3. As in NFS version 3, this mechanism provides reliable exclusive creation. Exclusive create is invoked when the how parameter is EXCLUSIVE. In this case, the client provides a verifier that can reasonably be expected to be unique. A combination of a client identifier, perhaps the client network address, and a unique number generated by the client, perhaps the RPC transaction identifier, may be appropriate. If the object does not exist, the server creates the object and stores the verifier in stable storage. For file systems that do not provide a mechanism for the storage of arbitrary file attributes, the server may use one or more elements of the object metadata to store the verifier. The verifier must be stored in stable storage to prevent erroneous failure on retransmission of the request. It is assumed that an exclusive create is being performed because exclusive semantics are critical to the application. Because of the expected usage, exclusive CREATE does not rely solely on the normally volatile duplicate request cache for storage of the verifier. The duplicate request cache in volatile storage does not survive a crash and may actually flush on a long network partition, opening failure windows. In the UNIX local file system environment, the expected storage location for the verifier on creation is the metadata (time stamps) of the object. 
For this reason, an exclusive object create may not include initial attributes because the server would have nowhere to store the verifier. If the server can not support these exclusive create semantics, possibly because of the requirement to commit the verifier to stable storage, it should fail the CREATE request with the error, NFS4ERR_NOTSUPP. During an exclusive CREATE request, if the object already exists, the server reconstructs the object's verifier and compares it with the verifier in the request. If they match, the server treats the request as a success. The request is presumed to be a duplicate of an earlier, successful request for which the reply was lost and that the server duplicate request cache mechanism did not detect. If the verifiers do not match, the request is rejected with the status, NFS4ERR_EXIST. Once the client has performed a successful exclusive create, it must issue a SETATTR to set the correct object attributes. Until it does so, it should not rely upon any of the object attributes, Expires: February 1999 [Page 35] Strawman NFS version 4 August 1998 since the server implementation may need to overload object metadata to store the verifier. Use of the GUARDED attribute does not provide exactly-once semantics. In particular, if a reply is lost and the server does not detect the retransmission of the request, the procedure can fail with NFS4ERR_EXIST, even though the create was performed successfully. Note: 1. Need to determine an initial set of attributes that must be set, and a set of attributes that can optionally be set, on a per-filetype basis. For instance, if the filetype is a NF4BLK then the device attributes must be set. 2. Need to consider the symbolic link path as an "attribute". No need for a READLINK op if this is so. Similarly, a filehandle could be defined as an attribute for LINK. 3. The presence of a generic create for multiple filetypes makes the protocol easier to extend to new filetypes in a minor rev (without defining new ops) 4. The specific exclusive create semantics can be removed if there is guaranteed support for extended attributes. The client could specify the verifier be stored in an extended attribute and then check the attribute value itself instead of relying on the server to do so. ERRORS NFS4ERR_IO NFS4ERR_ACCES NFS4ERR_EXIST NFS4ERR_NOTDIR NFS4ERR_NOSPC Expires: February 1999 [Page 36] Strawman NFS version 4 August 1998 NFS4ERR_ROFS NFS4ERR_NAMETOOLONG NFS4ERR_DQUOT NFS4ERR_NOTSUPP NFS4ERR_SERVERFAULT Expires: February 1999 [Page 37] Strawman NFS version 4 August 1998 9.5. Procedure 4: GETATTR - Get attributes SYNOPSIS (cfh), attrbits -> attrbits, attrvals ARGS attrbits: bitmap RESULTS attrbits: bitmap attrvals: sequence of attributes DESCRIPTION Obtain attributes from the server. The client sets a bit in the bitmap argument for each attribute value that it would like the server to return. The server returns an attribute bitmap that indicates the attribute values that it was able to return, followed by the attribute values ordered lowest attribute number first. The server must return a value for each attribute that the client requests if the attribute is supported by the server. If the server does not support an attribute or cannot approximate a useful value then it must not return the attribute value and must not set the attribute bit in the result bitmap. The server must return an error if it supports an attribute but cannot obtain its value. In that case no attribute values will be returned. 
All servers must support attribute 0 which is a bitmap of all supported attributes for the filesystem object.

IMPLEMENTATION

?

ERRORS

NFS4ERR_IO NFS4ERR_SERVERFAULT

9.6. Procedure 5: GETFH - Get current filehandle

SYNOPSIS

(cfh) -> filehandle

ARGS

RESULTS

filehandle: filehandle

DESCRIPTION

Returns the current filehandle. Operations that change the current filehandle, like LOOKUP or CREATE, do not automatically return the new filehandle as a result. For instance, if a client needs to look up a directory entry and obtain its filehandle then the following request will do it:

   1: PUTFH (directory filehandle)
   2: LOOKUP (entry name)
   3: GETFH

IMPLEMENTATION

?

ERRORS

NFS4ERR_SERVERFAULT

9.7. Procedure 6: LINK - Create link to an object

SYNOPSIS

(cfh), dir, newname -> (cfh)

ARGS

dir: filehandle
newname: utf8string

RESULTS

(none)

DESCRIPTION

Procedure LINK creates an additional name, newname, in the directory dir for the file system object identified by the current filehandle. The object and the directory dir must reside on the same file system and server.

IMPLEMENTATION

Changes to any property of the hard-linked files are reflected in all of the linked files. When a hard link is made to a file, the attributes for the file should have a value for nlink that is one greater than the value before the LINK. The comments under RENAME regarding object and target residing on the same file system apply here as well. The comments regarding the target name apply as well.

ERRORS

NFS4ERR_IO NFS4ERR_ACCES NFS4ERR_EXIST NFS4ERR_XDEV NFS4ERR_NOTDIR NFS4ERR_INVAL NFS4ERR_NOSPC NFS4ERR_ROFS NFS4ERR_MLINK NFS4ERR_NAMETOOLONG NFS4ERR_DQUOT NFS4ERR_NOTSUPP NFS4ERR_SERVERFAULT

9.8. Procedure 7: LOCKR - Create a read lock

SYNOPSIS

(cfh), id, offset, length -> lease

ARGS

id: uint64
offset: uint64
length: uint64

RESULTS

lease: uint32

DESCRIPTION

Requested by a client that needs to protect a file extent from change. Other clients may have read locks that overlap the extent completely or partially, but no other client or server process will be allowed to modify or create an overlapping write lock on the extent until there are no read or write locks covering any part of the extent. A write lock will be granted only when the leases for conflicting locks have expired or when all clients have removed their locks. The locked extent is permitted to lie partially or completely beyond the end of the file.

The id is a 64-bit value that the client provides to uniquely identify its lock. The server will attempt to match this value with a subsequent LOCKX or LOCKU request.

A read lock will receive an NFS4ERR_DENIED error if another client has requested a write lock or is holding a write lock on any part of the requested extent. The returned lease time is the time remaining on the lock-holder's lease.

IMPLEMENTATION

A read lock is mandatory. The server must prevent other clients or local processes from changing the locked extent of the file while the read lock is held. A duplicate read-lock request must be treated as an idempotent operation and must not return an error.
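The conflict rule described above can be summarized in a short, non-normative C sketch. The lock record type, the list of held locks, and the treatment of extent lengths are assumptions made for the example; in particular, overflow and any special "lock to end of file" length value are not addressed because the draft does not yet define them.

   /* Sketch only: a requested read lock is denied only if some write
    * lock overlaps any part of the requested extent.  Overlapping
    * read locks are permitted.
    */
   #include <stdint.h>
   #include <stdbool.h>
   #include <stddef.h>

   struct nfs_lock {
           uint64_t         id;        /* client-supplied lock id    */
           uint64_t         offset;    /* start of locked extent     */
           uint64_t         length;    /* length of locked extent    */
           bool             is_write;  /* LOCKW if true, else LOCKR  */
           struct nfs_lock *next;
   };

   static bool extents_overlap(uint64_t off1, uint64_t len1,
                               uint64_t off2, uint64_t len2)
   {
           return off1 < off2 + len2 && off2 < off1 + len1;
   }

   /* Returns true if a LOCKR for [offset, offset+length) must be
    * answered with NFS4ERR_DENIED.
    */
   static bool lockr_denied(const struct nfs_lock *held,
                            uint64_t offset, uint64_t length)
   {
           for (const struct nfs_lock *l = held; l != NULL; l = l->next)
                   if (l->is_write &&
                       extents_overlap(l->offset, l->length, offset, length))
                           return true;
           return false;
   }

A real server would also have to consider pending write-lock requests (to support the anti-starvation behavior described under LOCKW) and lease expiration, neither of which is shown here.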
Expires: February 1999 [Page 42] Strawman NFS version 4 August 1998 A LOCKR may be combined with a READ in a compound request so that the data be locked and read in a single operation: 1: PUTFH 0x12345 2: LOCKR 123 0,8192 3: READ 0,8192 or perhaps 1: PUTFH 0x12345 2: READ 0,8192 3: LOCKR 123 0,8192 allowing the client to read the data unconditionally yet change its caching strategy depending on whether the lock is granted. ERRORS NFS4ERR_DENIED NFS4ERR_IO NFS4ERR_NXIO NFS4ERR_ACCES NFS4ERR_INVAL NFS4ERR_SERVERFAULT Expires: February 1999 [Page 43] Strawman NFS version 4 August 1998 9.9. Procedure 8: LOCKW - Create write lock SYNOPSIS (cfh) id, offset, length -> lease ARGS id: uint64 offset: uint64 length: uint64 RESULTS lease: uint32 DESCRIPTION Requested by a client that needs to change a file. While a write lock is held, no other client can read the locked extent or obtain either a read lock or a write lock for the same extent. The locked extent is permitted to lie partially or completely beyond the end of the file. The id is a 64-bit value that the client provides to uniquely identify the lock. The server will attempt to match this value with a subsequent LOCKX or LOCKU request. An NFS4ERR_DENIED error will be returned if one or more other clients are holding a read or write lock on any part of the requested extent. The client should continue to retransmit the lock request (using exponential backoff to avoid server overload) until the request is granted. The client will not be granted a write lock for an extent that overlaps an extent that it has write locked previously. When the server rejects a write lock request it should prevent clients from renewing or obtaining new read locks for the same file for some reasonable period of time. This policy prevents write starvation. IMPLEMENTATION The server might employ a "fairness" scheme to arbitrate between multiple clients attempting write locks, e.g. client lock requests be ordered so that the first requester is given the preference window when the write lock becomes available. NOTE: Could get some interesting dynamics here where there Expires: February 1999 [Page 44] Strawman NFS version 4 August 1998 is much contention for read and write locks on a single file. I haven't begun to think about possible problems when a file becomes popular. I'm concerned that we keep the protocol simple and easy to understand - leaving implementations to focus on topics like "fairness" and "performance." A LOCKW may be combined with a READ in a compound request followed by a subsequent combination of WRITE and LOCKU where the client writes back the updated record/file, e.g. 1: PUTFH 0x12345 2: LOCKW 123 0,8192 3: READ 0,8192 Client updates data, then 1: PUTFH 0x12345 2: LOCKX 123 0,8192 3: WRITE 0,8192 4: LOCKU 123 0,8192 Note the use of a LOCKX to abort the transaction if the lock has been lost. It also seems a reasonable requirement that if a lOCKX is granted that it be valid for at least the duration of the compound request. ERRORS NFS4ERR_DENIED NFS4ERR_ACCES NFS4ERR_SERVERFAULT Expires: February 1999 [Page 45] Strawman NFS version 4 August 1998 9.10. Procedure 9: LOCKT - test for lock SYNOPSIS (cfh), offset, length -> lockstate ARGS offset: uint64 length: uint64 RESULTS lockstate: uint32 DESCRIPTION Requested by a client that needs to establish whether any part of an extent in a file is locked. 
The server returns one of three lock states: 0 - Unlocked 1 - Read lock held 2 - Write lock held ERRORS NFS4ERR_ACCES NFS4ERR_SERVERFAULT Expires: February 1999 [Page 46] Strawman NFS version 4 August 1998 9.11. Procedure 10: LOCKX - validate and extend lock SYNOPSIS (cfh) id, offset, length, locktype -> lease ARGS id: uint64 offset: uint64 length: uint64 locktype: enum { READLOCK | WRITELOCK } RESULTS lease: uint32 DESCRIPTION Requested by a client that wishes to extend the lease on a read or write lock. The id, offset, and length must match a previous successful LOCKR or LOCKW request. If successful, the server returns the remaining time for the new lease. A LOCKX operation must precede any READ or WRITE operation in the compound request that assumes the extent is locked. This serves two purposes: it assures the client that the server is still holding the lock (the server may have lost the lock for some reason) and it validates to the server that the client holds the lock. Without this validation the server will deny any read or write request on a locked file. An NFS4ERR_EXPIRED error means that the server has lost the lock, or that the client's lease expired before the client could renew it. The client must take appropriate recovery action and request a new lease. A lease could expire if the client attempted a lock extension close to the expiry time and the request was lost or dropped. In that case the retransmission of the extension request might arrive at the server after expiry. IMPLEMENTATION Even though an extension request might arrive after expiry, a benevolent server may grant the extension if it notices that there have been no other changes to the file since the expiry. Expires: February 1999 [Page 47] Strawman NFS version 4 August 1998 Note the server may acknowledge the ownership of the lock but deny a lease extension. In this case the lease time returned will be the time remaining on the original lease. The server must implement a "grace" period after a crash in which it will monitor all requests but respond to none. During the grace period information from LOCKX operations will be used to rebuild lock state. NOTE: Assumption here that if the server can recover very quickly, well within the lease times, then it might use the client's renewal requests to recover lock state. In the case where clients are unable to extend leases because the server is down and their leases expire, they should continue to attempt lease extensions in the hope that the grace period will allow recovery. The worst that can happen is that they miss the grace period, or that they lost the lease because of network partition (or server overload) and the lease extension is denied. ERRORS NFS4ERR_EXPIRED NFS4ERR_ACCES NFS4ERR_SERVERFAULT Expires: February 1999 [Page 48] Strawman NFS version 4 August 1998 9.12. Procedure 11: LOCKU - Unlock file SYNOPSIS (cfh) id, offset, length -> - ARGS id: uint64 offset: uint64 length: uint64 DESCRIPTION Unlock read or write lock for a file extent. The id, offset, and length must match that of a previous successful LOCKR or LOCKW request. An NFS4ERR_EXPIRED error means that the server has no knowledge of the client's lock - most likely the lease expired. In this situation the client may choose to take some recovery action. ERRORS NFS4ERR_EXPIRED NFS4ERR_ACCES NFS4ERR_SERVERFAULT Expires: February 1999 [Page 49] Strawman NFS version 4 August 1998 9.13. 
Procedure 12: LOOKUP - Lookup filename SYNOPSIS (cfh), filenames -> (cfh) ARGS filename: utf8string[] RESULTS (none) DESCRIPTION The current filehandle is assumed to refer to a directory. LOOKUP evaluates the pathname contained in the array of names and obtains a new current filehandle from the final name. All but the final name in the list must be the names of directories. If the pathname cannot be evaluated either because a component doesn't exist or because the client doesn't have permission to evaluate a component of the path, then an error will be returned and the current filehandle will be unchanged. IMPLEMENTATION If the client prefers a partial evaluation of the path then a sequence of LOOKUP operations can be substituted e.g. 1. PUTFH (directory filehandle) 2. LOOKUP "pub" "foo" "bar" 3. GETFH or 1. PUTFH (directory filehandle) 2. LOOKUP "pub" 3. GETFH 4. LOOKUP "foo" 5. GETFH 6. LOOKUP "bar" 7. GETFH NFS version 4 servers depart from the semantics of previous NFS versions in allowing LOOKUP requests to cross mountpoints on the Expires: February 1999 [Page 50] Strawman NFS version 4 August 1998 server. The client can detect a mountpoint crossing by comparing the fsid attribute of the directory with the fsid attribute of the directory looked up. If the fsids are different then the new directory is a server mountpoint. Unix clients that detect a mountpoint crossing will need to mount the server's filesystem. Servers that limit NFS access to "shares" or "exported" filesystems should provide a pseudo-filesystem into which the exported filesystems can be integrated, so that clients can browse the server's namespace. The clients view of a pseudo filesystem will be limited to paths that lead to exported filesystems. Note: previous versions of the protocol assigned special semantics to the names "." and "..". NFS version 4 assigns no special semantics to these names. The LOOKUPP operator must be used to lookup a parent directory. Note that this procedure does not follow symbolic links. The client is responsible for all parsing of filenames including filenames that are modified by symbolic links encountered during the lookup process. ERRORS NFS4ERR_IO NFS4ERR_NOENT NFS4ERR_ACCES NFS4ERR_NOTDIR NFS4ERR_NAMETOOLONG NFS4ERR_SERVERFAULT SEE CREATE Expires: February 1999 [Page 51] Strawman NFS version 4 August 1998 9.14. Procedure 13: LOOKUPP - Lookup parent directory SYNOPSIS (cfh) -> (cfh) ARGS (none) RESULTS (none) DESCRIPTION The current filehandle is assumed to refer to a directory. LOOKUPP assigns the filehandle for its parent directory to be the current filehandle. If there is no parent directory an ENOENT error must be returned. IMPLEMENTATION As for LOOKUP, LOOKUPP will also cross mountpoints. ERRORS NFS4ERR_IO NFS4ERR_NOENT NFS4ERR_ACCES NFS4ERR_SERVERFAULT SEE CREATE Expires: February 1999 [Page 52] Strawman NFS version 4 August 1998 9.15. Procedure 14: NVERIFY - Verify attributes different SYNOPSIS (cfh), attrbits, attrvals -> - ARGS attrbits: bitmap attrvals: sequence of attributes RESULTS (none) DESCRIPTION This operation is used to prefix a sequence of operations to be performed if one or more attributes have changed on some filesystem object. If all the attributes match then the error NFS4ERR_SAME must be returned. IMPLEMENTATION This operation is useful as a cache validation operator. If the object to which the attributes belong has changed then the following operations may obtain new data associated with that object. 
For instance, to check if a file has been changed and obtain new data if it has: 1. PUTFH (public) 2. LOOKUP "pub" "foo" "bar" 3. NVERIFY attrbits attrs 4. READ 0 32767 ERRORS NFS4ERR_IO NFS4ERR_ACCES NFS4ERR_SERVERFAULT NFS4ERR_SAME Expires: February 1999 [Page 53] Strawman NFS version 4 August 1998 9.16. Procedure 15: RESTOREFH - Restore saved filehandle SYNOPSIS (sfh) -> (cfh) ARGS (none) RESULTS (none) DESCRIPTION Make the saved filehandle the current filehandle. If there is no saved filehandle then return an error NFS4ERR_INVAL. IMPLEMENTATION Operators like CREATE and LOOKUP use the current filehandle to represent a directory and replace it with a new filehandle. Assuming the previous filehandle was saved with a SAVEFH operator, the previous filehandle can be restored as the current filehandle. This is commonly used to obtain post-operation attributes for the directory, e.g. 1. PUTFH (directory filehandle) 2. SAVEFH 3. GETATTR attrbits (pre-op dir attrs) 4. CREATE optbits "foo" attrs 5. GETATTR attrbits (file attributes) 6. RESTOREFH 7. GETATTR attrbits (post-op dir attrs) ERRORS NFS4ERR_SERVERFAULT Expires: February 1999 [Page 54] Strawman NFS version 4 August 1998 9.17. Procedure 16: SAVEFH - Save current filehandle SYNOPSIS (cfh) -> (sfh) ARGS (none) RESULTS (none) DESCRIPTION Save the current filehandle. If a previous filehandle was saved then it is no longer accessible. The saved filehandle can be restored as the current filehandle with the RESTOREFH operator. IMPLEMENTATION (see RESTOREFH) ERRORS NFS4ERR_SERVERFAULT Expires: February 1999 [Page 55] Strawman NFS version 4 August 1998 9.18. Procedure 17: PUTFH - Set current filehandle SYNOPSIS filehandle -> (cfh) ARGS filehandle: filehandle RESULTS (none) DESCRIPTION Replaces the current filehandle with the filehandle provided as an argument. If no filehandle has previously been installed as the current filehandle then root filehandle is assumed. If the length of the filehandle is zero, it is recognized by the server as a "public" filehandle. IMPLEMENTATION Commonly used as the first operator in any NFS request to set the context for following operations. ERRORS NFS4ERR_BADHANDLE NFS4ERR_SERVERFAULT Expires: February 1999 [Page 56] Strawman NFS version 4 August 1998 9.19. Procedure 18: PUTROOTFH - Set root filehandle SYNOPSIS - -> (cfh) ARGS (none) RESULTS (none) DESCRIPTION Replaces the current filehandle with the filehandle that represents the root of the server's namespace. From this filehandle a LOOKUP operation can locate any other filehandle on the server. This filehandle may be different from the "public" filehandle which may be associated with some other directory on the server. IMPLEMENTATION Commonly used as the first operator in any NFS request to set the context for following operations. ERRORS NFS4ERR_SERVERFAULT Expires: February 1999 [Page 57] Strawman NFS version 4 August 1998 9.20. Procedure 19: READ - Read from file SYNOPSIS (cfh), offset, count -> eof, data ARGS offset: uint64 count: uint32 RESULTS eof: bool data: opaque <> DESCRIPTION READ reads data from the file identified by the current filehandle. offset The position within the file at which the read is to begin. An offset of 0 means to read data starting at the beginning of the file. If offset is greater than or equal to the size of the file, the status, NFS4_OK, is returned with count set to 0 and eof set to TRUE, subject to access permissions checking. count The number of bytes of data that are to be read. 
If count is 0, the READ will succeed and return 0 bytes of data, subject to access permissions checking. count must be less than or equal to the value of the rtmax for the file system that contains file. If greater, the server may return only rtmax bytes, resulting in a short read. If the operation is successful the results are: eof If the read ended at the end-of-file (formally, in a correctly formed READ request, if offset + count is equal to Expires: February 1999 [Page 58] Strawman NFS version 4 August 1998 the size of the file), eof is returned as TRUE; otherwise it is FALSE. A successful READ of an empty file will always return eof as TRUE. data The counted data read from the file. IMPLEMENTATION It is possible for the server to return fewer than count bytes of data. If the server returns less than the count requested and eof set to FALSE, the client should issue another READ to get the remaining data. A server may return less data than requested under several circumstances. The file may have been truncated by another client or perhaps on the server itself, changing the file size from what the requesting client believes to be the case. This would reduce the actual amount of data available to the client. It is possible that the server may back off the transfer size and reduce the read request return. Server resource exhaustion may also occur necessitating a smaller read return. If the file is locked the server will return an NFS4ERR_LOCKED error. Since the lock may be of short duration, the client may choose to retransmit the READ request (with exponential backoff) until the operation succeeds. ERRORS NFS4ERR_IO NFS4ERR_NXIO NFS4ERR_ACCES NFS4ERR_INVAL NFS4ERR_LOCKED NFS4ERR_SERVERFAULT Expires: February 1999 [Page 59] Strawman NFS version 4 August 1998 9.21. Procedure 20: READDIR - Read directory SYNOPSIS (cfh), cookie, dircount, maxcount, attrbits -> { cookie, filename, attrbits, attributes }... ARGS cookie: uint64 This should be set to 0 in the first request to read the directory. On subsequent requests, it should be a cookie as returned by the server. dircount: uint32 The maximum number of bytes of directory information returned. This number should not include the size of the attributes and file handle portions of the result. maxcount: uint32 The maximum size of the result in bytes. The size must include all XDR overhead. The server is free to return less than count bytes of data. attrbits: bitmap The attributes to be returned for each directory entry. RESULTS A list of directory entries. Each entry contains: cookie: uint64 A value recognized by the server as a "bookmark" into the directory. It may be an offset or an index into a table. Ideally, the cookie value should not change if the directory is modified. filename: utf8string; The name of the directory entry. attrbits: bitmap Expires: February 1999 [Page 60] Strawman NFS version 4 August 1998 A bitmap that indicates which attributes follow. Ideally this bitmap will be identical to the attribute bitmap in the arguments, i.e. the server returns everything the client asked for. However, the returned bitmap may be different if the server does not support the attribute or if the attribute is not valid for the filetype. Note: need to consider the file handle as an "attribute" that may be optionally returned. The concept of file handle as attribute might also be useful for the CREATE of a hard link. 
DESCRIPTION Procedure READDIR retrieves a variable number of entries from a file system directory and returns complete information about each entry along with information to allow the client to request additional directory entries in a subsequent READDIR. IMPLEMENTATION Issues that need to be understood for this procedure include increased cache flushing activity on the client (as new file handles are returned with names which are entered into caches) and over-the-wire overhead versus expected subsequent LOOKUP and GETATTR elimination. The dircount and maxcount fields are included as an optimization. Consider a READDIR call on a UNIX operating system implementation for 1048 bytes; the reply does not contain many entries because of the overhead due to attributes and file handles. An alternative is to issue a READDIR call for 8192 bytes and then only use the first 1048 bytes of directory information. However, the server doesn't know that all that is needed is 1048 bytes of directory information (as would be returned by READDIR). It sees the 8192 byte request and issues a VOP_READDIR for 8192 bytes. It then steps through all of those directory entries, obtaining attributes and file handles for each entry. When it encodes the result, the server only encodes until it gets 8192 bytes of results which include the attributes and file handles. Thus, it has done a larger VOP_READDIR and many more attribute fetches than it needed to. The ratio of the directory entry size to the size of the attributes plus the size of the file handle is usually at least 8 to 1. The server has done much more work than it needed to. The solution to this problem is for the client to provide two counts to the server. The first is the number of bytes of Expires: February 1999 [Page 61] Strawman NFS version 4 August 1998 directory information that the client really wants, dircount. The second is the maximum number of bytes in the result, including the attributes and file handles, maxcount. Thus, the server will issue a VOP_READDIR for only the number of bytes that the client really wants to get, not an inflated number. This should help to reduce the size of VOP_READDIR requests on the server, thus reducing the amount of work done there, and to reduce the number of VOP_LOOKUP, VOP_GETATTR, and other calls done by the server to construct attributes and file handles. ERRORS NFS4ERR_IO NFS4ERR_ACCES NFS4ERR_NOTDIR NFS4ERR_BAD_COOKIE NFS4ERR_TOOSMALL NFS4ERR_NOTSUPP NFS4ERR_SERVERFAULT Expires: February 1999 [Page 62] Strawman NFS version 4 August 1998 9.22. Procedure 21: READLINK - Read symbolic link SYNOPSIS (cfh) -> linktext ARGS (none) RESULTS linktext: utf8string DESCRIPTION READLINK reads the data associated with a symbolic link. The data is a UTF-8 string that is opaque to the server. That is, whether created by an NFS client or created locally on the server, the data in a symbolic link is not interpreted when created, but is simply stored. IMPLEMENTATION A symbolic link is nominally a pointer to another file. The data is not necessarily interpreted by the server, just stored in the file. It is possible for a client implementation to store a path name that is not meaningful to the server operating system in a symbolic link. A READLINK operation returns the data to the client for interpretation. If different implementations want to share access to symbolic links, then they must agree on the interpretation of the data in the symbolic link. The READLINK operation is only allowed on objects of type, NFLNK. 
The server should return the error, NFS4ERR_INVAL, if the object is not of type, NF4LNK.

ERRORS NFS4ERR_IO NFS4ERR_INVAL NFS4ERR_ACCES NFS4ERR_NOTSUPP

Expires: February 1999 [Page 63] Strawman NFS version 4 August 1998

NFS4ERR_SERVERFAULT

Expires: February 1999 [Page 64] Strawman NFS version 4 August 1998

9.23. Procedure 22: REMOVE - Remove filesystem object

SYNOPSIS (cfh), filename -> -

ARGS filename: utf8string

RESULTS (none)

DESCRIPTION REMOVE removes (deletes) a directory entry named by filename from the directory corresponding to the current filehandle. If the entry in the directory was the last reference to the corresponding file system object, the object may be destroyed.

IMPLEMENTATION NFS versions 2 and 3 required a different operator RMDIR for directory removal. NFS version 4 REMOVE can be used to delete any directory entry independent of its filetype. The concept of last reference is server specific. However, if the nlink field in the previous attributes of the object had the value 1, the client should not rely on referring to the object via a file handle. Likewise, the client should not rely on the resources (disk space, directory entry, and so on) formerly associated with the object becoming immediately available. Thus, if a client needs to be able to continue to access a file after using REMOVE to remove it, the client should take steps to make sure that the file will still be accessible. The usual mechanism used is to use RENAME to rename the file from its old name to a new hidden name.

ERRORS NFS4ERR_NOENT NFS4ERR_IO NFS4ERR_ACCES NFS4ERR_NOTDIR

Expires: February 1999 [Page 65] Strawman NFS version 4 August 1998

NFS4ERR_NAMETOOLONG NFS4ERR_ROFS NFS4ERR_NOTEMPTY NFS4ERR_SERVERFAULT

Expires: February 1999 [Page 66] Strawman NFS version 4 August 1998

9.24. Procedure 23: RENAME - Rename directory entry

SYNOPSIS (cfh), oldname, newdir, newname -> -

ARGS oldname: utf8string newdir: filehandle newname: utf8string

RESULTS (none)

DESCRIPTION RENAME renames the directory entry identified by oldname in the directory corresponding to the current filehandle to newname in directory newdir. The operation is required to be atomic to the client. Source and target directories must reside on the same file system on the server. If the directory, newdir, already contains an entry with the name, newname, the source object must be compatible with the target: either both are non-directories or both are directories and the target must be empty. If compatible, the existing target is removed before the rename occurs. If they are not compatible or if the target is a directory but not empty, the server should return the error, NFS4ERR_EXIST.

IMPLEMENTATION The RENAME operation must be atomic to the client. The statement "source and target directories must reside on the same file system on the server" means that the fsid fields in the attributes for the directories are the same. If they reside on different file systems, the error, NFS4ERR_XDEV, is returned. Even though the operation is atomic, the status, NFS4ERR_MLINK, may be returned if the server used an "unlink/link/unlink" sequence internally. A file handle may or may not become stale on a rename. However, server implementors are strongly encouraged to attempt to keep file handles from becoming stale in this fashion.

Expires: February 1999 [Page 67] Strawman NFS version 4 August 1998

On some servers, the filenames, "." and "..", are illegal as either oldname or newname.
In addition, neither oldname nor newname can be an alias for the source directory. These servers will return the error, NFS4ERR_INVAL, in these cases. If oldname and newname both refer to the same file (they might be hard links of each other), then RENAME should perform no action and return success. ERRORS NFS4ERR_NOENT NFS4ERR_IO NFS4ERR_ACCES NFS4ERR_EXIST NFS4ERR_XDEV NFS4ERR_NOTDIR NFS4ERR_ISDIR NFS4ERR_INVAL NFS4ERR_NOSPC NFS4ERR_ROFS NFS4ERR_MLINK NFS4ERR_NAMETOOLONG NFS4ERR_NOTEMPTY NFS4ERR_DQUOT NFS4ERR_NOTSUPP NFS4ERR_SERVERFAULT Expires: February 1999 [Page 68] Strawman NFS version 4 August 1998 9.25. Procedure 24: SETATTR - Set attributes SYNOPSIS (cfh), attrbits, attrvals -> - ARGS attrbits: bitmap attrvals DESCRIPTION Procedure SETATTR changes one or more of the attributes of a file system object on the server. The new attributes are specified with a bitmap and the attributes that follow the bitmap in bit order. IMPLEMENTATION The file size attribute is used to request changes to the size of a file. A value of 0 causes the file to be truncated, a value less than the current size of the file causes data from new size to the end of the file to be discarded, and a size greater than the current size of the file causes logically zeroed data bytes to be added to the end of the file. Servers are free to implement this using holes or actual zero data bytes. Clients should not make any assumptions regarding a server's implementation of this feature, beyond that the bytes returned will be zeroed. Servers must support extending the file size via SETATTR. SETATTR is not guaranteed atomic. A failed SETATTR may partially change a file's attributes. Changing the size of a file with SETATTR indirectly changes the mtime. A client must account for this as size changes can result in data deletion. If server and client times differ, programs that compare client time to file times can break. A time maintenance protocol should be used to limit client/server time skew. If the server cannot successfully set all the attributes it must return an NFS4ERR_INVAL error. An error may be returned if the server can not store a uid or gid in its own representation of uids or gids, respectively. If the server can only support 32 bit offsets and sizes, a SETATTR request to set the size of a file to Expires: February 1999 [Page 69] Strawman NFS version 4 August 1998 larger than can be represented in 32 bits will be rejected with this same error. ERRORS NFS4ERR_PERM NFS4ERR_IO NFS4ERR_ACCES NFS4ERR_INVAL NFS4ERR_NOSPC NFS4ERR_ROFS NFS4ERR_DQUOT NFS4ERR_SERVERFAULT Expires: February 1999 [Page 70] Strawman NFS version 4 August 1998 9.26. Procedure 25: VERIFY - Verify attributes same SYNOPSIS (cfh), attrbits, attrvals -> - ARGS attrbits: bitmap attrvals RESULTS (none) DESCRIPTION This operation is used to verify that attributes have a value assumed by the client before proceeding with following operations in the compound request. For instance, a VERIFY can be used to make sure that the file size has not changed for an append-mode write: 1. PUTFH 0x0123456 2. VERIFY attrbits attrs 3. WRITE 450328 4096 If the attributes are not as expected, then the request fails and the data is not appended to the file. IMPLEMENTATION ERRORS Expires: February 1999 [Page 71] Strawman NFS version 4 August 1998 9.27. 
Procedure 26: WRITE - Write to file SYNOPSIS (cfh), offset, count, stability, data -> count, committed, verifier ARGS offset: uint64 count: uint32 stability: uint32 data: opaque RESULTS count: uint32 committed: uint32 verifier: uint32 DESCRIPTION Write data to the file identified by the current filehandle. Arguments are as follows: offset The position within the file at which the write is to begin. An offset of 0 means to write data starting at the beginning of the file. count The number of bytes of data to be written. If count is 0, the WRITE will succeed and return a count of 0, barring errors due to permissions checking. The size of data must be less than or equal to the value of the wtmax attribute for the filesystem that contains file. If greater, the server may write only wtmax bytes, resulting in a short write. stability Expires: February 1999 [Page 72] Strawman NFS version 4 August 1998 If stable is FILE_SYNC, the server must commit the data written plus all file system metadata to stable storage before returning results. This corresponds to the NFS version 2 protocol semantics. Any other behavior constitutes a protocol violation. If stable is DATA_SYNC, then the server must commit all of the data to stable storage and enough of the metadata to retrieve the data before returning. The server implementor is free to implement DATA_SYNC in the same fashion as FILE_SYNC, but with a possible performance drop. If stable is UNSTABLE, the server is free to commit any part of the data and the metadata to stable storage, including all or none, before returning a reply to the client. There is no guarantee whether or when any uncommitted data will subsequently be committed to stable storage. The only guarantees made by the server are that it will not destroy any data without changing the value of verf and that it will not commit the data and metadata at a level less than that requested by the client. data The data to be written to the file. If the operation is successful the following results are returned: count The number of bytes of data written to the file. The server may write fewer bytes than requested. If so, the actual number of bytes written starting at location, offset, is returned. committed The server should return an indication of the level of commitment of the data and metadata via committed. If the server committed all data and metadata to stable storage, committed should be set to FILE_SYNC. If the level of commitment was at least as strong as DATA_SYNC, then committed should be set to DATA_SYNC. Otherwise, committed must be returned as UNSTABLE. If stable was FILE_SYNC, then committed must also be FILE_SYNC: anything else constitutes a protocol violation. If stable was DATA_SYNC, then committed may be FILE_SYNC or DATA_SYNC: anything else constitutes a protocol violation. If stable was UNSTABLE, then committed may be either FILE_SYNC, DATA_SYNC, or UNSTABLE. verifier Expires: February 1999 [Page 73] Strawman NFS version 4 August 1998 This is a cookie that the client can use to determine whether the server has changed state between a call to WRITE and a subsequent call to either WRITE or COMMIT. This cookie must be consistent during a single instance of the NFS version 4 protocol service and must be unique between instances of the NFS version 4 protocol server, where uncommitted data may be lost. 
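The stable/committed rules above reduce to a single ordering check: the server may commit more strongly than the client asked, never less. The following C fragment is a sketch, not part of the protocol; the enum mirrors stable_how4 from the RPC definition in section 11, and the function name is invented here.

    /*
     * Return 1 if the committed level in a WRITE reply is permitted
     * for the requested stability level, 0 if the combination would
     * be a protocol violation as described above.
     */
    enum stable_how4 { UNSTABLE = 0, DATA_SYNC = 1, FILE_SYNC = 2 };

    int
    write_commit_level_ok(enum stable_how4 requested,
                          enum stable_how4 committed)
    {
        /* Stronger commitment than requested is always acceptable. */
        return committed >= requested;
    }

A FILE_SYNC request must therefore come back FILE_SYNC, a DATA_SYNC request may come back DATA_SYNC or FILE_SYNC, and an UNSTABLE request may come back at any level.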
If a client writes data to the server with the stable argument set to UNSTABLE and the reply yields a committed response of DATA_SYNC or UNSTABLE, the client will follow up some time in the future with a COMMIT operation to synchronize outstanding asynchronous data and metadata with the server's stable storage, barring client error. It is possible that due to client crash or other error that a subsequent COMMIT will not be received by the server. IMPLEMENTATION It is possible for the server to write fewer than count bytes of data. In this case, the server should not return an error unless no data was written at all. If the server writes less than count bytes, the client should issue another WRITE to write the remaining data. It is assumed that the act of writing data to a file will cause the mtime of the file to be updated. However, the mtime of the file should not be changed unless the contents of the file are changed. Thus, a WRITE request with count set to 0 should not cause the mtime of the file to be updated. The definition of stable storage has been historically a point of contention. The following expected properties of stable storage may help in resolving design issues in the implementation. Stable storage is persistent storage that survives: 1. Repeated power failures. 2. Hardware failures (of any board, power supply, etc.). 3. Repeated software crashes, including reboot cycle. This definition does not address failure of the stable storage module itself. The verifier, is defined to allow a client to detect different instances of an NFS version 4 protocol server over which cached, uncommitted data may be lost. In the most likely case, the verifier allows the client to detect server reboots. This Expires: February 1999 [Page 74] Strawman NFS version 4 August 1998 information is required so that the client can safely determine whether the server could have lost cached data. If the server fails unexpectedly and the client has uncommitted data from previous WRITE requests (done with the stable argument set to UNSTABLE and in which the result committed was returned as UNSTABLE as well) it may not have flushed cached data to stable storage. The burden of recovery is on the client and the client will need to retransmit the data to the server. A suggested verifier would be to use the time that the server was booted or the time the server was last started (if restarting the server without a reboot results in lost buffers). The committed field in the results allows the client to do more effective caching. If the server is committing all WRITE requests to stable storage, then it should return with committed set to FILE_SYNC, regardless of the value of the stable field in the arguments. A server that uses an NVRAM accelerator may choose to implement this policy. The client can use this to increase the effectiveness of the cache by discarding cached data that has already been committed on the server. Some implementations may return NFS4ERR_NOSPC instead of NFS4ERR_DQUOT when a user's quota is exceeded. ERRORS NFS4ERR_IO NFS4ERR_ACCES NFS4ERR_FBIG NFS4ERR_DQUOT NFS4ERR_NOSPC NFS4ERR_ROFS NFS4ERR_INVAL NFS4ERR_LOCKED NFS4ERR_SERVERFAULT Expires: February 1999 [Page 75] Strawman NFS version 4 August 1998 9.28. Procedure 27: SECINFO - Obtain Available Security SYNOPSIS (cfh), filename -> { secinfo } ARGS filename: utf8string RESULTS secinfo: secinfo This is a link list of security flavors available for the supplied file handle and filename. 
DESCRIPTION This procedure is used by the client to obtain a list of valid RPC authentication flavors for a specific file handle, file name pair. For the flavors AUTH_NONE, AUTH_SYS, AUTH_DH, and AUTH_KRB4 no additional security information is returned. For a return value of AUTH_RPCSEC_GSS, a security triple is returned that contains the mechanism object id (as defined in [RFC2078]), the quality of protection (as defined in [RFC2078]) and the service type (as defined in [RFC2203]). It is possible for SECINFO to return multiple entries with flavor equal to AUTH_RPCSEC_GSS with different security triple values.

IMPLEMENTATION This procedure is expected to be used by the NFS client when the error value of NFS4ERR_WRONGSEC is returned from another NFS procedure. This signifies to the client that the server's security policy is different from what the client is currently using. At this point, the client is expected to obtain a list of possible security flavors and choose what best suits its policies.

ERRORS NFS4ERR_NOENT NFS4ERR_IO NFS4ERR_ACCES NFS4ERR_NAMETOOLONG

Expires: February 1999 [Page 76] Strawman NFS version 4 August 1998

NFS4ERR_STALE NFS4ERR_SERVERFAULT NFS4ERR_FHEXPIRED NFS4ERR_WRONGSEC

Expires: February 1999 [Page 77] Strawman NFS version 4 August 1998

10. Locking notes

10.1. Short and long leases

The usual lease trade-offs apply: short leases are good for fast server recovery at a cost of increased LOCKX requests, though this may not be a factor if we can take advantage of compound requests to piggyback LOCKX on normal read and write requests. If the client is not actively doing I/O, perhaps a user editing a locked file, then the LOCKX requests become more obvious. Longer leases are certainly kinder and gentler to large internet servers trying to handle huge numbers of clients. LOCKX requests drop in direct proportion to the lease time. The disadvantages of long leases are slower server recovery after a crash (the server must wait for leases to expire and the grace period to pass before granting new lock requests) and increased file contention (if a client fails to transmit an unlock request then the server must wait for lease expiry before granting new locks). Assuming that locks are held for very short periods (msec), that unlock requests usually get through, and that there is usually very little lock contention, I'd recommend long leases to keep LOCKX requests to a minimum, i.e. leases of one or two minutes. This seems appropriate for an Internet scale - and no problem on Intranets.

10.2. Clocks and leases

To avoid the need for synchronized clocks, lease times are granted by the server as a time delta, though there is a requirement that the client and server clocks do not drift excessively over the duration of the lock. There is also the issue of propagation delay across the network which could easily be several hundred milliseconds across the Internet as well as the possibility that requests will be lost and need to be retransmitted. To take propagation delay into account, the client should subtract it from lease times, e.g. if the client estimates the one-way propagation delay as 200 msec, then it can assume that the lease is already 200 msec old when it gets it. In addition, it'll take another 200 msec to get a response back to the server. So the client must send a lock renewal or write data back to the server 400 msec before the lease would expire, as in the sketch below.
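A small C sketch of this arithmetic, for illustration only; the helper name is invented here and the figures are the client's own estimates, not protocol fields.

    #include <stdint.h>

    /*
     * lease_ms is the delta returned with a lock; one_way_delay_ms is
     * the client's estimate of network propagation delay.  Returns how
     * long the client may safely wait before sending a renewal (LOCKX)
     * or flushing data back to the server.
     */
    uint32_t
    safe_lease_budget_ms(uint32_t lease_ms, uint32_t one_way_delay_ms)
    {
        /*
         * The lease was already one_way_delay_ms old when the reply
         * arrived, and the renewal needs another one_way_delay_ms to
         * reach the server, so keep 2 * delay of slack.
         */
        uint32_t slack = 2 * one_way_delay_ms;

        return lease_ms > slack ? lease_ms - slack : 0;
    }

With the 200 msec estimate used above and a 60 second lease, the client would schedule its renewal no later than 59.6 seconds after the grant.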
The client could measure propagation delay with reasonable accuracy by measuring the round-trip time for lock extensions assuming that there's not much server processing overhead in an extension. Expires: February 1999 [Page 78] Strawman NFS version 4 August 1998 10.3. Locks and lease times Lock requests do not contain desired lease times. The server allocates leases with no information from the client. The assumption here is that the client really has no idea of just how long the lock will be required. If a scenario can be found where a hint from the client as to the maximum lease time desired would be useful, then this feature could be added to lock requests. 10.4. Lease scalability Read locks will be vastly more popular than write locks and will be popular for client caching. A server might easily have a hundred thousand concurrent read locks on a single file. The server doesn't need to store individual lease times for each client - only the longest lease associated with each locked file. 10.5. Rejecting write locks and denial of service Unrestricted use of locking could certainly asking for denial of service attacks. There's an implicit assumption here that attempts to set write locks will be rejected if the client does not have write permission for the file. Similarly for read locks if the client has no read permission. 10.6. Locking of directories and other meta-files A question: should directories and/or other filesystem objects like symbolic links be lockable ? Clients will want to cache whole directories. It would be nice to have consistent directory caches, but it would require that any client creating a new file get a write lock on the directory and be prepared to handle lock denial. Is the weak cache consistency that we currently have for directories acceptable ? I think perhaps it is - given the expense of doing full consistency on an Internet scale. 10.7. Proxy servers and leases Proxy servers. There is some interest in having NFS V4 support caching proxies. Support for proxy caching is a requirement if servers are to handle large numbers of clients - clients that may have little or no ability to cache on their own. How could proxy servers use lease-based locking ? 10.8. Archive updates and lease time adjustment Regularly-updated archives. It is common for FTP and HTTP servers on the Internet to be updated at regularly scheduled intervals, e.g. on Expires: February 1999 [Page 79] Strawman NFS version 4 August 1998 the hour, daily, or weekly. These servers could grant extremely long leases that get progressively shorter as the update time draws near. Clients get to cache efficiently, network and server load is vastly reduced, and new data is available as soon as it is updated. The lease times might be randomly skewed across clients to spread the update load. These servers may choose to assign a blanket lease time for the entire server or for an entire filesystem. 10.9. Locking and the new latency Latency caused by locking. If a client wants to update a file then it will have to wait until the leases on read locks have expired. If the leases are of the order of 60 seconds or several minutes then the client (and end-user) may be blocked for a while. This is unfamiliar for current NFS users who are not bothered by mandatory locking - but it could be an issue if we decide we like the caching benefits. A similar problem exists for clients that wish to read a file that is write locked. 
The read-lock case is likely to be more common if read-locking is used to protect cached data on the client. Expires: February 1999 [Page 80] Strawman NFS version 4 August 1998 11. NFS Version 4 RPC definition file /* * nfs_prot.x * */ %#pragma ident "@(#)nfs_prot.x 1.24 98/08/06" /* * Sizes */ const NFS4_FHSIZE = 128; const NFS4_CREATEVERFSIZE = 8; /* * Timeval */ struct nfstime4 { int64_t seconds; uint32_t nseconds; }; struct specdata4 { uint32_t specdata1; uint32_t specdata2; }; /* * Basic data types */ typedef opaque utf8string<>; typedef uint64_t offset4; typedef uint32_t count4; typedef uint32_t length4; typedef uint32_t writeverf4; typedef opaque createverf4[NFS4_CREATEVERFSIZE]; typedef utf8string filename4; typedef uint64_t nfs_lockid4; typedef uint32_t nfs_lease4; typedef uint32_t nfs_lockstate4; typedef uint64_t nfs_cookie4; typedef utf8string linktext4; typedef opaque sec_oid4<>; typedef uint32_t qop4; typedef uint32_t fattr4_type; typedef uint32_t fattr4_mode; Expires: February 1999 [Page 81] Strawman NFS version 4 August 1998 typedef uint32_t fattr4_accessbits; typedef uint32_t fattr4_nlink; typedef utf8string fattr4_uid; typedef utf8string fattr4_gid; typedef uint64_t fattr4_size; typedef uint64_t fattr4_used; typedef specdata4 fattr4_rdev; typedef uint64_t fattr4_fsid; typedef uint64_t fattr4_fileid; typedef nfstime4 fattr4_atime; typedef nfstime4 fattr4_mtime; typedef nfstime4 fattr4_ctime; typedef uint32_t fattr4_rtmax; typedef uint32_t fattr4_rtpref; typedef uint32_t fattr4_rtmult; typedef uint32_t fattr4_wtmax; typedef uint32_t fattr4_wtpref; typedef uint32_t fattr4_wtmult; typedef uint32_t fattr4_dtpref; typedef uint64_t fattr4_maxfilesize; typedef uint64_t fattr4_change; typedef nfstime4 fattr4_time_delta; typedef uint32_t fattr4_properties; typedef uint32_t fattr4_linkmax; typedef uint32_t fattr4_name_max; /* * Error status */ enum nfsstat4 { NFS4_OK = 0, NFS4ERR_PERM = 1, NFS4ERR_NOENT = 2, NFS4ERR_IO = 5, NFS4ERR_NXIO = 6, NFS4ERR_ACCES = 13, NFS4ERR_EXIST = 17, NFS4ERR_XDEV = 18, NFS4ERR_NODEV = 19, NFS4ERR_NOTDIR = 20, NFS4ERR_ISDIR = 21, NFS4ERR_INVAL = 22, NFS4ERR_FBIG = 27, NFS4ERR_NOSPC = 28, NFS4ERR_ROFS = 30, NFS4ERR_MLINK = 31, NFS4ERR_NAMETOOLONG = 63, NFS4ERR_NOTEMPTY = 66, Expires: February 1999 [Page 82] Strawman NFS version 4 August 1998 NFS4ERR_DQUOT = 69, NFS4ERR_STALE = 70, NFS4ERR_BADHANDLE = 10001, NFS4ERR_NOT_SYNC = 10002, NFS4ERR_BAD_COOKIE = 10003, NFS4ERR_NOTSUPP = 10004, NFS4ERR_TOOSMALL = 10005, NFS4ERR_SERVERFAULT = 10006, NFS4ERR_BADTYPE = 10007, NFS4ERR_JUKEBOX = 10008, NFS4ERR_SAME = 10009, NFS4ERR_DENIED = 10010, NFS4ERR_EXPIRED = 10011, NFS4ERR_LOCKED = 10012 }; enum rpc_flavor4 { AUTH_NONE = 0, AUTH_SYS = 1, AUTH_DH = 2, AUTH_KRB4 = 3, AUTH_RPCSEC_GSS = 4 }; /* * From RFC 2203 */ enum rpc_gss_svc_t { RPC_GSS_SVC_NONE = 1, RPC_GSS_SVC_INTEGRITY = 2, RPC_GSS_SVC_PRIVACY = 3 }; /* * LOCKX lock type */ enum lockx_locktype { READLOCK = 1, WRITELOCK = 2 }; /* * File access handle */ struct nfs_fh4 { opaque data; }; Expires: February 1999 [Page 83] Strawman NFS version 4 August 1998 /* * File types */ enum ftype4 { NF4REG = 1, NF4DIR = 2, NF4BLK = 3, NF4CHR = 4, NF4LNK = 5, NF4SOCK = 6, NF4FIFO = 7 }; const FATTR4_TYPE = 1; const FATTR4_MODE = 2; const FATTR4_ACCESSBITS = 3; const FATTR4_NLINK = 4; const FATTR4_UID = 5; const FATTR4_GID = 6; const FATTR4_SIZE = 7; const FATTR4_USED = 8; const FATTR4_RDEV = 9; const FATTR4_FSID = 10; const FATTR4_FILEID = 11; const FATTR4_ATIME = 12; const FATTR4_MTIME = 13; const FATTR4_CTIME = 14; const 
FATTR4_RTMAX = 15;
const FATTR4_RTPREF = 16;
const FATTR4_RTMULT = 17;
const FATTR4_WTMAX = 18;
const FATTR4_WTPREF = 19;
const FATTR4_WTMULT = 20;
const FATTR4_DTPREF = 21;
const FATTR4_MAXFILESIZE = 22;
const FATTR4_TIME_DELTA = 23;
const FATTR4_PROPERTIES = 24;
const FATTR4_LINKMAX = 25;
const FATTR4_NAME_MAX = 26;
const FATTR4_NO_TRUNC = 27;
const FATTR4_CHOWN_RESTRICTED = 28;
const FATTR4_CASE_INSENSITIVE = 29;
const FATTR4_CASE_PRESERVING = 30;

/* * fattr4_properties bits */
const FSF_LINK = 0x00000001;

Expires: February 1999 [Page 84] Strawman NFS version 4 August 1998

const FSF_SYMLINK = 0x00000002;
const FSF_HOMOGENEOUS = 0x00000004;
const FSF_CANSETTIME = 0x00000008;
const FSF_NOTRUNC = 0x00000010;
const FSF_CHOWN_RESTRICTED = 0x00000020;
const FSF_CASE_INSENSITIVE = 0x00000040;
const FSF_CASE_PRESERVING = 0x00000080;

struct bitmap4 { uint32_t bits<>; };
struct attrlist { opaque attrs<>; };
struct fattr4 { bitmap4 attrmask; attrlist attr_vals; };

/* * ACCESS: Check access permission */
const ACCESS4_READ = 0x0001;
const ACCESS4_LOOKUP = 0x0002;
const ACCESS4_MODIFY = 0x0004;
const ACCESS4_EXTEND = 0x0008;
const ACCESS4_DELETE = 0x0010;
const ACCESS4_EXECUTE = 0x0020;
struct ACCESS4args { uint32_t access; };
struct ACCESS4resok { uint32_t access; };
union ACCESS4res switch (nfsstat4 status) { case NFS4_OK: ACCESS4resok resok; default: void; };

/* * COMMIT: Commit cached data on server to stable storage

Expires: February 1999 [Page 85] Strawman NFS version 4 August 1998

*/
struct COMMIT4args { offset4 offset; count4 count; };
struct COMMIT4resok { writeverf4 verf; };
union COMMIT4res switch (nfsstat4 status) { case NFS4_OK: COMMIT4resok resok; default: void; };

/* * CREATE: Create a file */
enum createmode4 { UNCHECKED = 0, GUARDED = 1, EXCLUSIVE = 2 };
union createhow4 switch (createmode4 mode) { case UNCHECKED: case GUARDED: fattr4 createattrs; case EXCLUSIVE: createverf4 verf; };
struct CREATE4args { filename4 name; ftype4 objtype; createhow4 how; };
union CREATE4res switch (nfsstat4 status) { case NFS4_OK: void; default: void; };

Expires: February 1999 [Page 86] Strawman NFS version 4 August 1998

/* * GETATTR: Get file attributes */
struct GETATTR4args { bitmap4 attr_request; };
struct GETATTR4resok { fattr4 obj_attributes; };
union GETATTR4res switch (nfsstat4 status) { case NFS4_OK: GETATTR4resok resok; default: void; };

/* * GETFH: Get current filehandle */
struct GETFH4resok { nfs_fh4 object; };
union GETFH4res switch (nfsstat4 status) { case NFS4_OK: GETFH4resok resok; default: void; };

/* * LINK: Create link to an object */
struct LINK4args { nfs_fh4 dir; filename4 newname; };
union LINK4res switch (nfsstat4 status) { case NFS4_OK: void; default: void; };

/*

Expires: February 1999 [Page 87] Strawman NFS version 4 August 1998

* LOCKR: Create a read lock on a file */
struct LOCKR4args { nfs_lockid4 id; offset4 offset; length4 length; };
struct LOCKR4resok { nfs_lease4 lease; };
union LOCKR4res switch (nfsstat4 status) { case NFS4_OK: LOCKR4resok resok; default: void; };

/* * LOCKW: Create a write lock */
struct LOCKW4args { nfs_lockid4 id; offset4 offset; length4 length; };
struct LOCKW4resok { nfs_lease4 lease; };
union LOCKW4res switch (nfsstat4 status) { case NFS4_OK: LOCKW4resok resok; default: void; };

/* * LOCKT: Test for lock */
struct LOCKT4args { offset4 offset; length4 length; };

Expires: February 1999 [Page 88] Strawman NFS version 4 August 1998

struct LOCKT4resok { nfs_lockstate4 lockstate; };
union LOCKT4res switch (nfsstat4 status) { case NFS4_OK: LOCKT4resok resok; default: void; };

/* * LOCKX: validate and extend lock */
struct LOCKX4args { nfs_lockid4 id; offset4 offset; length4 length; lockx_locktype locktype; };
struct LOCKX4resok { nfs_lease4 lease; };
union LOCKX4res switch (nfsstat4 status) { case NFS4_OK: LOCKX4resok resok; default: void; };

/* * LOCKU: Unlock file */
struct LOCKU4args { nfs_lockid4 id; offset4 offset; length4 length; };
union LOCKU4res switch (nfsstat4 status) { case NFS4_OK: void; default: void; };

Expires: February 1999 [Page 89] Strawman NFS version 4 August 1998

/* * LOOKUP: Lookup filename */
struct LOOKUP4args { filename4 filenames<>; };
union LOOKUP4res switch (nfsstat4 status) { case NFS4_OK: void; default: void; };

/* * LOOKUPP: Lookup parent directory */
union LOOKUPP4res switch (nfsstat4 status) { case NFS4_OK: void; default: void; };

/* * NVERIFY: Verify attributes different */
struct NVERIFY4args { bitmap4 attr_request; fattr4 obj_attributes; };
union NVERIFY4res switch (nfsstat4 status) { case NFS4_OK: void; default: void; };

/* * RESTOREFH: Restore saved filehandle */
union RESTOREFH4res switch (nfsstat4 status) { case NFS4_OK: void; default: void;

Expires: February 1999 [Page 90] Strawman NFS version 4 August 1998

};

/* * SAVEFH: Save current filehandle */
union SAVEFH4res switch (nfsstat4 status) { case NFS4_OK: void; default: void; };

/* * PUTFH: Set current filehandle */
struct PUTFH4args { nfs_fh4 object; };
union PUTFH4res switch (nfsstat4 status) { case NFS4_OK: void; default: void; };

/* * PUTROOTFH: Set root filehandle */
union PUTROOTFH4res switch (nfsstat4 status) { case NFS4_OK: void; default: void; };

/* * READ: Read from file */
struct READ4args { offset4 offset; count4 count; };
struct READ4resok { bool eof; opaque data<>; };

Expires: February 1999 [Page 91] Strawman NFS version 4 August 1998

union READ4res switch (nfsstat4 status) { case NFS4_OK: READ4resok resok; default: void; };

/* * READDIR: Read directory */
struct READDIR4args { nfs_cookie4 cookie; count4 dircount; count4 maxcount; bitmap4 attr_request; };
struct entry4 { nfs_cookie4 cookie; filename4 name; fattr4 attrs; entry4 *nextentry; };
struct dirlist4 { entry4 *entries; bool eof; };
struct READDIR4resok { dirlist4 reply; };
union READDIR4res switch (nfsstat4 status) { case NFS4_OK: READDIR4resok resok; default: void; };

/* * READLINK: Read symbolic link */
struct READLINK4resok { linktext4 link;

Expires: February 1999 [Page 92] Strawman NFS version 4 August 1998

};
union READLINK4res switch (nfsstat4 status) { case NFS4_OK: READLINK4resok resok; default: void; };

/* * REMOVE: Remove filesystem object */
struct REMOVE4args { filename4 target; };
union REMOVE4res switch (nfsstat4 status) { case NFS4_OK: void; default: void; };

/* * RENAME: Rename directory entry */
struct RENAME4args { filename4 oldname; nfs_fh4 newdir; filename4 newname; };
union RENAME4res switch (nfsstat4 status) { case NFS4_OK: void; default: void; };

/* * SETATTR: Set attributes */
struct SETATTR4args { fattr4 obj_attributes; };
union SETATTR4res switch (nfsstat4 status) { case NFS4_OK:

Expires: February 1999 [Page 93] Strawman NFS version 4 August 1998

void; default: void; };

/* * VERIFY: Verify attributes same */
struct VERIFY4args { bitmap4 attr_request; fattr4 obj_attributes; };
union VERIFY4res switch (nfsstat4 status) { case NFS4_OK: void; default: void; };

/* * WRITE: Write to file */
enum stable_how4 { UNSTABLE = 0, DATA_SYNC = 1, FILE_SYNC = 2 };
struct WRITE4args { offset4 offset; count4 count; stable_how4 stable; opaque data<>; };
struct WRITE4resok { count4 count; stable_how4 committed; writeverf4 verf; };
union WRITE4res switch (nfsstat4 status) { case NFS4_OK: WRITE4resok resok; default: void; };

Expires: February 1999 [Page 94] Strawman NFS version 4 August 1998

/* * SECINFO: Obtain Available Security Mechanisms */
struct SECINFO4args { filename4 name; };
struct rpc_flavor_info { sec_oid4 oid; qop4 qop; rpc_gss_svc_t service; };
struct secinfo4 { rpc_flavor4 flavor; rpc_flavor_info *flavor_info; secinfo4 *nextentry; };
struct SECINFO4resok { secinfo4 reply; };
union SECINFO4res switch (nfsstat4 status) { case NFS4_OK: SECINFO4resok resok; default: void; };

enum opcode { OP_NULL = 0, OP_ACCESS = 1, OP_COMMIT = 2, OP_CREATE = 3, OP_GETATTR = 4, OP_GETFH = 5, OP_LINK = 6, OP_LOCKR = 7, OP_LOCKW = 8, OP_LOCKT = 9, OP_LOCKX = 10, OP_LOCKU = 11, OP_LOOKUP = 12, OP_LOOKUPP = 13, OP_NVERIFY = 14, OP_RESTOREFH = 15,

Expires: February 1999 [Page 95] Strawman NFS version 4 August 1998

OP_SAVEFH = 16, OP_PUTFH = 17, OP_PUTROOTFH = 18, OP_READ = 19, OP_READDIR = 20, OP_READLINK = 21, OP_REMOVE = 22, OP_RENAME = 23, OP_SETATTR = 24, OP_VERIFY = 25, OP_WRITE = 26, OP_SECINFO = 27 };

union opunion switch (unsigned opcode) { case OP_NULL: void; case OP_ACCESS: ACCESS4args opaccess; case OP_COMMIT: COMMIT4args opcommit; case OP_CREATE: CREATE4args opcreate; case OP_GETATTR: GETATTR4args opgetattr; case OP_GETFH: void; case OP_LINK: LINK4args oplink; case OP_LOCKR: LOCKR4args oplockr; case OP_LOCKW: LOCKW4args oplockw; case OP_LOCKT: LOCKT4args oplockt; case OP_LOCKX: LOCKX4args oplockx; case OP_LOCKU: LOCKU4args oplocku; case OP_LOOKUP: LOOKUP4args oplookup; case OP_LOOKUPP: void; case OP_NVERIFY: NVERIFY4args opnverify; case OP_RESTOREFH: void; case OP_SAVEFH: void; case OP_PUTFH: PUTFH4args opputfh; case OP_PUTROOTFH: void; case OP_READ: READ4args opread; case OP_READDIR: READDIR4args opreaddir; case OP_READLINK: void; case OP_REMOVE: REMOVE4args opremove; case OP_RENAME: RENAME4args oprename; case OP_SETATTR: SETATTR4args opsetattr; case OP_VERIFY: VERIFY4args opverify; case OP_WRITE: WRITE4args opwrite; case OP_SECINFO: SECINFO4args opsecinfo; };

struct op { opunion ops; };

Expires: February 1999 [Page 96] Strawman NFS version 4 August 1998

union resultdata switch (unsigned resop) { case OP_NULL: void; case OP_ACCESS: ACCESS4res opaccess; case OP_COMMIT: COMMIT4res opcommit; case OP_CREATE: CREATE4res opcreate; case OP_GETATTR: GETATTR4res opgetattr; case OP_GETFH: GETFH4res opgetfh; case OP_LINK: LINK4res oplink; case OP_LOCKR: LOCKR4res oplockr; case OP_LOCKW: LOCKW4res oplockw; case OP_LOCKT: LOCKT4res oplockt; case OP_LOCKX: LOCKX4res oplockx; case OP_LOCKU: LOCKU4res oplocku; case OP_LOOKUP: LOOKUP4res oplookup; case OP_LOOKUPP: LOOKUPP4res oplookupp; case OP_NVERIFY: NVERIFY4res opnverify; case OP_RESTOREFH: RESTOREFH4res oprestorefh; case OP_SAVEFH: SAVEFH4res opsavefh; case OP_PUTFH: PUTFH4res opputfh; case OP_PUTROOTFH: PUTROOTFH4res opputrootfh; case OP_READ: READ4res opread; case OP_READDIR: READDIR4res opreaddir; case OP_READLINK: READLINK4res opreadlink; case OP_REMOVE: REMOVE4res opremove; case OP_RENAME: RENAME4res oprename; case OP_SETATTR: SETATTR4res opsetattr; case OP_VERIFY: VERIFY4res opverify; case OP_WRITE: WRITE4res opwrite; case OP_SECINFO: SECINFO4res opsecinfo; };

struct COMPOUND4args { utf8string tag; op oplist<>; };
struct COMPOUND4resok { utf8string tag; resultdata data<>; };
union COMPOUND4res switch (nfsstat4 status) { case NFS4_OK: COMPOUND4resok resok; default: void; };

Expires: February 1999 [Page 97]
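To make the compound layout concrete, the following hand-written C fragment mirrors the shape of COMPOUND4args for the request 1: PUTROOTFH 2: LOOKUP "pub" "foo" 3: GETFH. It is illustrative only: the struct and enum names (nfs_op, opcode4, compound_args, lookup_args) are simplified stand-ins and are not the output rpcgen would produce from this definition file; XDR encoding and the RPC call itself are omitted.

    #include <stdint.h>

    enum opcode4 { OP_GETFH = 5, OP_LOOKUP = 12, OP_PUTROOTFH = 18 };

    struct lookup_args {            /* simplified LOOKUP4args         */
        uint32_t     ncomponents;
        const char **components;    /* UTF-8 path components          */
    };

    struct nfs_op {                 /* simplified op / opunion        */
        enum opcode4 opcode;
        union {
            struct lookup_args oplookup;
        } u;                        /* PUTROOTFH and GETFH carry none */
    };

    struct compound_args {          /* simplified COMPOUND4args       */
        const char    *tag;
        uint32_t       nops;
        struct nfs_op *oplist;
    };

    int main(void)
    {
        static const char *path[] = { "pub", "foo" };
        struct nfs_op ops[3];
        struct compound_args req;

        ops[0].opcode = OP_PUTROOTFH;           /* no arguments       */
        ops[1].opcode = OP_LOOKUP;
        ops[1].u.oplookup.ncomponents = 2;
        ops[1].u.oplookup.components = path;
        ops[2].opcode = OP_GETFH;               /* no arguments       */

        req.tag = "lookup pub/foo";
        req.nops = 3;
        req.oplist = ops;

        (void)req;      /* XDR encoding and transport not shown       */
        return 0;
    }

The point is simply that a compound request is a tagged array of discriminated operations; the server evaluates them in order against the current filehandle.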
Strawman NFS version 4 August 1998 /* * Remote file service routines */ program NFS4_PROGRAM { version NFS_V4 { void NFSPROC4_NULL(void) = 0; COMPOUND4res NFSPROC4_COMPOUND(COMPOUND4args) = 1; } = 4; } = 100003; Expires: February 1999 [Page 98] Strawman NFS version 4 August 1998 12. Bibliography [Gray] C. Gray, D. Cheriton, "Leases: An Efficient Fault-Tolerant Mechanism for Distributed File Cache Consistency," Proceedings of the Twelfth Symposium on Operating Systems Principles, p. 202-210, December 1989. [Juszczak] Juszczak, Chet, "Improving the Performance and Correctness of an NFS Server," USENIX Conference Proceedings, USENIX Association, Berkeley, CA, June 1990, pages 53-63. Describes reply cache implementation that avoids work in the server by handling duplicate requests. More important, though listed as a side-effect, the reply cache aids in the avoidance of destructive non-idempotent operation re-application -- improving correctness. [Kazar] Kazar, Michael Leon, "Synchronization and Caching Issues in the Andrew File System," USENIX Conference Proceedings, USENIX Association, Berkeley, CA, Dallas Winter 1988, pages 27-36. A description of the cache consistency scheme in AFS. Contrasted with other distributed file systems. [Macklem] Macklem, Rick, "Lessons Learned Tuning the 4.3BSD Reno Implementation of the NFS Protocol," Winter USENIX Conference Proceedings, USENIX Association, Berkeley, CA, January 1991. Describes performance work in tuning the 4.3BSD Reno NFS implementation. Describes performance improvement (reduced CPU loading) through elimination of data copies. [Mogul] Mogul, Jeffrey C., "A Recovery Protocol for Spritely NFS," USENIX File System Workshop Proceedings, Ann Arbor, MI, USENIX Association, Berkeley, CA, May 1992. Second paper on Spritely NFS proposes a lease-based scheme for recovering state of consistency protocol. [Nowicki] Nowicki, Bill, "Transport Issues in the Network File System," ACM SIGCOMM newsletter Computer Communication Review, April 1989. A brief description of the basis for the dynamic retransmission work. Expires: February 1999 [Page 99] Strawman NFS version 4 August 1998 [Pawlowski] Pawlowski, Brian, Ron Hixon, Mark Stein, Joseph Tumminaro, "Network Computing in the UNIX and IBM Mainframe Environment," Uniforum `89 Conf. Proc., (1989) Description of an NFS server implementation for IBM's MVS operating system. [RFC1094] Sun Microsystems, Inc., "NFS: Network File System Protocol Specification", RFC1094, March 1989. ftp://ftp.isi.edu/in-notes/rfc1094.txt [RFC1813] Callaghan, B., Pawlowski, B., Staubach, P., "NFS Version 3 Protocol Specification", RFC1813, Sun Microsystems, Inc., June 1995. ftp://ftp.isi.edu/in-notes/rfc1813.txt [RFC1831] Srinivasan, R., "RPC: Remote Procedure Call Protocol Specification Version 2", RFC1831, Sun Microsystems, Inc., August 1995. ftp://ftp.isi.edu/in-notes/rfc1831.txt [RFC1832] Srinivasan, R., "XDR: External Data Representation Standard", RFC1832, Sun Microsystems, Inc., August 1995. ftp://ftp.isi.edu/in-notes/rfc1832.txt [RFC1833] Srinivasan, R., "Binding Protocols for ONC RPC Version 2", RFC1833, Sun Microsystems, Inc., August 1995. ftp://ftp.isi.edu/in-notes/rfc1833.txt [RFC2078] Linn, J., "Generic Security Service Application Program Interface, Version 2", RFC2078, OpenVision Technologies, January 1997. 
ftp://ftp.isi.edu/in-notes/rfc2078.txt Expires: February 1999 [Page 100] Strawman NFS version 4 August 1998 [RFC2203] Eisler, M., Chiu, A., Ling, L., "RPCSEC_GSS Protocol Specification" RFC2203, Sun Microsystems, Inc., August 1995. ftp://ftp.isi.edu/in-notes/rfc2203.txt [Sandberg] Sandberg, R., D. Goldberg, S. Kleiman, D. Walsh, B. Lyon, "Design and Implementation of the Sun Network Filesystem," USENIX Conference Proceedings, USENIX Association, Berkeley, CA, Summer 1985. The basic paper describing the SunOS implementation of the NFS version 2 protocol, and discusses the goals, protocol specification and trade- offs. [SPNEGO] Baize, E., Pinkas, D., "The Simple and Protected GSS-API Negotiation Mechanism", draft-ietf-cat-snego-09.txt, Bull, April 1998. ftp://ftp.isi.edu/internet-drafts/draft-ietf-cat-snego-09.txt [Srinivasan] Srinivasan, V., Jeffrey C. Mogul, "Spritely NFS: Implementation and Performance of Cache Consistency Protocols", WRL Research Report 89/5, Digital Equipment Corporation Western Research Laboratory, 100 Hamilton Ave., Palo Alto, CA, 94301, May 1989. This paper analyzes the effect of applying a Sprite-like consistency protocol applied to standard NFS. The issues of recovery in a stateful environment are covered in [Mogul]. [X/OpenNFS] X/Open Company, Ltd., X/Open CAE Specification: Protocols for X/Open Internetworking: XNFS, X/Open Company, Ltd., Apex Plaza, Forbury Road, Reading Berkshire, RG1 1AX, United Kingdom, 1991. This is an indispensable reference for NFS version 2 protocol and accompanying protocols, including the Lock Manager and the Portmapper. [X/OpenPCNFS] X/Open Company, Ltd., X/Open CAE Specification: Protocols for X/Open Internetworking: (PC)NFS, Developer's Specification, X/Open Company, Ltd., Apex Plaza, Forbury Road, Reading Berkshire, RG1 1AX, United Kingdom, 1991. This is an indispensable reference for NFS version 2 protocol and accompanying protocols, including the Lock Manager and the Portmapper. Expires: February 1999 [Page 101] Strawman NFS version 4 August 1998 13. Author's Address Address comments related to this memorandum to: nfsv4-wg@sunroof.eng.sun.com Spencer Shepler Sun Microsystems, Inc. 7808 Moonflower Drive Austin, Texas 78750 Phone: 1-512-349-9376 E-mail: shepler@eng.sun.com Expires: February 1999 [Page 102]