<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd" [
<!-- One method to get references from the online citation libraries.
     There has to be one entity for each item to be referenced. 
     An alternate method (rfc include) is described in the references. -->

<!ENTITY RFC2119 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.2119.xml">
<!ENTITY RFC8166 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8166.xml">
<!ENTITY RFC8167 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8167.xml">
<!ENTITY RFC8178 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.8178.xml">
<!ENTITY RFC1094 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.1094.xml">
<!ENTITY RFC1813 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.1813.xml">
<!ENTITY RFC1833 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.1833.xml">
<!ENTITY RFC5661 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.5661.xml">
<!ENTITY RFC7530 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.7530.xml">
<!ENTITY RFC7862 SYSTEM "http://xml.resource.org/public/rfc/bibxml/reference.RFC.7862.xml">
]>
<!-- Below are generally applicable Processing Instructions (PIs) that most I-Ds might want to use.
     (Here they are set differently than their defaults in xml2rfc v1.32) -->
<?rfc strict="yes" ?>
<!-- give errors regarding ID-nits and DTD validation -->
<!-- control the table of contents (ToC) -->
<?rfc toc="yes"?>
<!-- generate a ToC -->
<?rfc tocdepth="4"?>
<!-- the number of levels of subsections in ToC. default: 3 -->
<!-- control references -->
<?rfc symrefs="yes"?>
<!-- use symbolic references tags, i.e, [RFC2119] instead of [1] -->
<?rfc sortrefs="yes" ?>
<!-- sort the reference entries alphabetically -->
<!-- control vertical white space 
     (using these PIs as follows is recommended by the RFC Editor) -->
<?rfc compact="yes" ?>
<!-- do not start each main section on a new page -->
<?rfc subcompact="no" ?>
<!-- keep one blank line between list items -->
<!-- end of list of popular I-D processing instructions -->

<rfc category="std" ipr="pre5378Trust200902" docName="draft-dnoveck-nfsv4-nfsulb-01"  xml:lang="en">
  <front>
    <title abbrev="Transport-generic NFS ULBs">Transport-generic Network File System (NFS) Upper Layer Bindings To RPC-Over-RDMA </title>
    <author initials="D." surname="Noveck" fullname="David Noveck">
      <address>
        <postal>
          <street>1601 Trapelo Road</street>
          <city>Waltham</city>
          <region>MA</region>
          <code>02451</code>
          <country>United States of America</country>
        </postal>
        <phone>+1 781 572 8038</phone>
        <email>davenoveck@gmail.com</email>
      </address>
    </author>
    <date/>
    <area>Transport</area>
    <workgroup>Network File System Version 4</workgroup>
    <keyword>NFS-Over-RDMA</keyword>
    <abstract>
      <t>
        This document specifies Upper Layer Bindings to allow the use of 
        RPC-over-RDMA by protocols related to the Network File System (NFS).
        Such bindings are required when using RPC-over-RDMA, in order to 
        enable use of Direct Data Placement and for a number of other 
        reasons.  These bindings are structured to be applicable to all 
        known versions of the RPC-over-RDMA transport, including optional 
        extensions.  All versions of NFS are addressed.  
      </t>
    </abstract>
    <note title="Requirements Language">
      <t>
        The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 
        "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in 
        this document are to be interpreted as described in 
        <xref target="RFC2119" />.  
      </t>
    </note>
  </front>
  <middle>
    <section title="Introduction" 
             anchor="INTRO">
      <t>
        An RPC-over-RDMA transport, such as the ones  defined in 
        <xref target="RFC8166"/> and 
        <xref target="rpcrdmav2"/>,
        may employ 
        direct data placement to convey data payloads associated 
        with RPC transactions.  To enable successful interoperation, 
        RPC client and server implementations must agree as to which 
        XDR data items in which particular RPC procedures are eligible 
        for direct data placement (DDP).  Specifying those data items 
        is a major component of a protocol's Upper Layer Binding.
      </t>   
      <t>
        In addition, Upper Layer Bindings are required to include 
        additional information to assure that adequate resources are 
        allocated to receive RPC replies, and for a number of other reasons.
      </t>   
      <t>
        This document contains material required of Upper Layer Bindings, 
        as specified in <xref target="RFC8166"/>, 
        for the NFS protocol versions listed below.  In addition, 
        bindings are provided, when necessary, for auxiliary protocols 
        used together with NFS versions 2 and 3.
      <list style="symbols">
        <t>
          NFS Version 2 <xref target="RFC1094"/>
        </t>
        <t>
          NFS Version 3 <xref target="RFC1813"/> 
        </t>
        <t>
          NFS Version 4.0 <xref target="RFC7530"/> 
        </t>
        <t>
          NFS Version 4.1 <xref target="RFC5661" />
        </t>
        <t>
          NFS Version 4.2 <xref target="RFC7862" />
        </t>
      </list>
      </t>   

    </section>
    <section title="Conveying NFS Operations On RPC-Over-RDMA" 
             anchor="CONV">
      <t>
        This document is written to apply to multiple versions of the
        RPC-over-RDMA transport, only the first of which is currently
        specified by a Proposed Standard (in 
        <xref target="RFC8166"/>).
        However, it is expected that other versions will be created, and
        this document has been structured to support future versions by
        focusing on the functions to be provided by the transport and the
        transport limitations which the Upper Layer Protocols need to
        accommodate, allowing the transport specification and the 
        specifications for associated extensions to define how
        those functions will be provided and the details of the transport
        limitations.
      </t>
      <t>
        In the subsections that follow, we will describe the generic
        function to be provided or limitation to be accommodated and 
        follow it with material that describes, in general, how that
        issue is dealt with in Version One.  For more detail about
        Version One, <xref target="RFC8166"/> should
        be consulted.  How  these issues are to be dealt with in future
        versions is left to the specification documents for those versions
        and associated documents defining optional extensions.
      </t>
      <t>
        For example:
      <list style="symbols"> 
        <t>
          ULBs within this document define which data items are eligible
          for Direct Data Placement while transport versions might differ as
          to how this is to be effected. See Sections 
          <xref target="CONV-rqddp" format="counter" /> and
          <xref target="CONV-rsddp" format="counter" /> for more detail.
          In both cases, <xref target="CONV-sgddp" /> discusses issues
          connected with the use of discontiguous areas for Direct 
          Data Placement.
        </t>
        <t>
          <xref target="CONV-viol" /> defines the concept of a 
          DDP-eligibility
          violation and requires that such violations be reported 
          while transport versions might differ as to the manner in which 
          the reporting is to be done.
        </t>
        <t>
          <xref target="CONV-long" /> discusses issues arising from limits 
          on the size of messages conveyed using RDMA SENDs.  Different
          transport versions may have different size limits, while, in the 
          case of replies, the ULBs are responsible for specifying how
          limits on reply sizes are to be determined.
        </t>

      </list>
      </t>
      <section title="Direct Placement of Request Data"
               anchor="CONV-rqddp">
        <t>
          When DDP-eligible XDR data items appear in a request 
          the requester needs to take special
          actions in order to provide for the direct placement 
          of those items in
          the responder's memory.   The specific actions to be taken
          are defined by the transport version being used.
        </t>
        <t>
          In Version One, the Read list in each RPC-over-RDMA transport 
          header represents a set of memory regions containing a single
          item of DDP-eligible NFS argument data.  Large data items, 
          such as the data payload of an NFS version 3 WRITE procedure, 
          can be referenced by the Read list.  The NFS server pulls such 
          payloads from the client and places them directly into its 
          own memory.  
        </t>
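        <t>
          For reference, the Version One encoding of Read chunks 
          (reproduced here for convenience from 
          <xref target="RFC8166"/>, which remains authoritative for the 
          exact XDR definitions) has the following form:
        </t>
        <figure>
          <artwork xml:space="preserve">
   struct xdr_rdma_segment {
           uint32 handle;        /* Registered memory handle */
           uint32 length;        /* Length of the chunk in bytes */
           uint64 offset;        /* Chunk virtual address or offset */
   };

   struct xdr_read_chunk {
           uint32 position;      /* Position in XDR stream */
           struct xdr_rdma_segment target;
   };
          </artwork>
        </figure>
        <t>
          The position field ties each Read chunk to the XDR offset of 
          the DDP-eligible argument data item whose payload it conveys.
        </t>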
      </section>
      <section title="Direct Placement of Response Data"
               anchor="CONV-rsddp">
        <t>
          When a request is such that it is possible for DDP-eligible data 
          items to appear in the corresponding reply, the requester needs to
          take special actions in order to provide for the direct 
          placement of those items in the requester's memory, 
          if such placement is desired.   The specific actions to be taken
          are defined by the transport version being used, as is the means
          to indicate that such direct placement is not to be done.
        </t>
        <t>
          In Version One, the Write list in each RPC-over-RDMA transport 
          header represents a set of memory regions that can receive 
          DDP-eligible NFS result data.  Large data items, such as the 
          payload of an NFS version 3 READ procedure, can be referenced 
          by the Write list.  The NFS server pushes such payloads to the 
          client, placing them directly into the client's memory, using 
          target addresses provided by the client when sending the request.
        </t>
        <t>
          Each Write chunk corresponds to a specific XDR data item in an 
          NFS reply.  This document describes how NFS client and server 
          implementations determine the correspondence between Write 
          chunks and XDR results.  
        </t>
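        <t>
          For comparison, the corresponding Version One XDR for the Write 
          list (again reproduced for convenience from 
          <xref target="RFC8166"/>, which remains authoritative) is:
        </t>
        <figure>
          <artwork xml:space="preserve">
   struct xdr_write_chunk {
           struct xdr_rdma_segment target&lt;&gt;;
   };

   struct xdr_write_list {
           struct xdr_write_chunk entry;
           struct xdr_write_list *next;
   };
          </artwork>
        </figure>
        <t>
          Because a Write chunk carries no Position field, the pairing of 
          Write chunks with particular result data items cannot be derived 
          from the transport header alone and is instead a matter for the 
          ULB, as described in this document.
        </t>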
      </section>
      <section title="Scatter-gather when Using DDP"
               anchor="CONV-sgddp">
        <t>
          In order to accommodate the storage of multiple data blocks within
          individual cache buffers, the RPC-over-RDMA transport allows the
          memory area associated with a DDP-eligible data item to be
          discontiguous.  How the addresses of the areas involved are
          indicated depends on the transport version.
        </t>
        <t>
          Within Version One, a chunk typically corresponds to exactly 
          one XDR data item.  Each Read chunk is represented as a list 
          of segments at the same XDR Position.  Each Write chunk is 
          represented as an array of segments.  An NFS client thus has 
          the flexibility to advertise a set of discontiguous memory 
          regions in which to convey a single DDP-eligible XDR data item.  
        </t>
      </section>
      <section title="DDP-eligibility Violations"
               anchor="CONV-viol">
        <t>
          When the means defined by the transport for the direct placement
          of an XDR data item is applied to an XDR data item not defined in
          the ULB as DDP-eligible, a DDP-eligibility violation is recognized.
          The means by which such violations are to be reported is defined
          by the particular transport version being used.
        </t>
        <t>
          To report a DDP-eligibility violation within Version One, an 
          NFS server returns one of: 
        <list style="symbols">
          <t>
            An RPC-over-RDMA message of type RDMA_ERROR, with the 
            rdma_xid field set to the XID of the matching NFS Call, 
            and the rdma_error field set to ERR_CHUNK
          </t>
          <t>
            An RPC message (via an RDMA_MSG message) with the xid field 
            set to the XID of the matching NFS Call, the mtype field 
            set to REPLY, the stat field set to MSG_ACCEPTED, 
            and the accept_stat field set to GARBAGE_ARGS.  
          </t>
        </list> 
        </t>      
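        <t>
          For reference, the error code used in the first of these 
          alternatives is defined (in 
          <xref target="RFC8166"/>, which remains authoritative) as:
        </t>
        <figure>
          <artwork xml:space="preserve">
   enum rpc_rdma_errcode {
           ERR_VERS  = 1,    /* Value fixed for all versions */
           ERR_CHUNK = 2
   };
          </artwork>
        </figure>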
      </section>
      <section title="Long Calls and Replies"
               anchor="CONV-long">
        <t>
          Because of the use of pre-posted receive buffers whose size is
          fixed, all RPC-over-RDMA transport versions have limits on the
          size of messages which can be conveyed without use of explicit 
          RDMA operations, although different transport versions may have 
          different limits.  In particular, when the transport version allows
          messages to be continued across multiple RDMA SENDs, the limit can
          be substantially greater than the receive buffer size.  
          Also note that the size of the messages allowed may be reduced 
          because of space taken 
          up by the transport header fields.
        </t>
        <t>
          Each transport version is responsible for defining the message 
          size limits and the means by which the transfer of messages that 
          exceed these limits is to be provided for.  
          These means may be different 
          in the cases of long calls and replies.
        </t>
        <t>
          When using Version One, if an NFS request is too large to be 
          conveyed within the NFS server's responder inline threshold, 
          even after any DDP-eligible data items have been removed, 
          an NFS client must send the request in the form of a Long Call.  
          The entire NFS request is sent in a special Read chunk called a 
          Position Zero Read chunk.  
        </t>
        <t>
          Also when using Version One, if an NFS client determines that 
          the maximum size of an NFS reply could be too large to be 
          conveyed within its own  inline threshold, it provides a 
          Reply chunk in the RPC-over-RDMA transport header conveying 
          the NFS request.  The server places the entire NFS reply in 
          the Reply chunk.  
        </t>
        <t>
          There exist cases in which an NFS client needs to provide both 
          a Position Zero Read chunk and a Reply chunk for the same RPC.  
          One common source of such situations is when the 
          RPC authentication flavor being used requires that DDP-eligible 
          data items never be removed from RPC messages.
        </t>      
      </section>
    </section>
    <section title="Preparatory Material for Multiple Bindings" 
             anchor="PREP">
      <t>
        Although each of the NFS versions and each of the 
        auxiliary protocols
        discussed  in <xref target="AUX"/> has its own ULB, there is 
        important preparatory material in the subsections below that applies 
        to multiple ULPs.  In particular: 
      <list style="symbols">
        <t> 
          The material in <xref target="PREP-est"/> applies to all of
          the ULPs discussed in this document.
        </t>
        <t> 
          The material in <xref target="PREP-retry"/> applies to NFSv2, NFSv3,
          NFSv4.0, the MOUNT protocol, and the NFSACL protocol.
        </t>
      </list>
      </t>
      <section title="Reply Size Estimation" 
               anchor="PREP-est">
        <t>
          During the construction of each RPC Call message, a client is 
          responsible for allocating appropriate resources for receiving 
          the matching Reply message.  The resources required depend on the 
          maximum reply size expected, on whether DDP-eligible data items can 
          be removed from the reply, and on the transport version being used.
          The ULB is responsible for defining how the maximum reply size is 
          to be determined, while the specification of the transport version 
          being used is responsible for defining how this maximum affects 
          the resources to be allocated.  Because the responder may not be
          able to send the required response when these resources have not
          been allocated, reliable reply size estimation is necessary to 
          allow successful interoperation. 
        </t>
        <t>
          In many cases the Upper Layer Protocol's XDR definition provides 
          enough information to enable the client to make a reliable 
          prediction of the maximum size of the expected Reply message.  
          However, even if there are variable-size data items in the result, 
          the maximum size of the RPC Reply message can still be reliably 
          estimated in many cases, for example when: 
        <list style="symbols">
          <t>
            The client requests only a specific portion of an object 
            (for example, using the "count" and "offset" fields in an 
            NFS READ).  
          </t>
          <t>
            The client has already cached the size of the whole object 
            it is about to request (e.g., via a previous NFS GETATTR request). 
          </t>
          <t>
            The client specifies a reply size limit for the particular 
            reply, as it does by setting the count field of a READDIR
            request.
          </t>
        </list> 
        </t>
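        <t>
          As an illustration of the first of these cases (the component 
          sizes shown are illustrative rather than normative), a client 
          issuing an NFS version 3 READ with a count of 8192 bytes can 
          bound the reply size as follows:
        </t>
        <figure>
          <artwork xml:space="preserve">
   maximum READ3res size =
         fixed RPC reply header (including the verifier)
       + status discriminant and post-op file attributes
       + count, eof, and opaque length fields
       + 8192 bytes of data, padded to a 4-byte XDR boundary
          </artwork>
        </figure>
        <t>
          When the data payload is to be placed directly via a Write chunk, 
          only the fixed-size portion of this total needs to be received 
          inline.
        </t>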
        <t>
          It is sometimes not possible to determine the maximum Reply 
          message size based solely on the above criteria.  Client 
          implementers can choose to provide the largest possible 
          Reply buffer in those cases, based on, for instance, 
          the largest possible NFS READ or WRITE payload (which is 
          negotiated at mount time).  
        </t>
        <t>
          There exist cases in which a client cannot be sure that any 
          a priori determination is fully reliable.  Handling of such 
          cases is discussed in <xref target="PREP-retry"/>. 
        </t>
      </section>
      <section title="Retry to Deal with Reply Size Mis-estimation" 
               anchor="PREP-retry">
        <t>
          For some of the protocols discussed in this document, it is 
          possible for a compliant responder to send a valid reply whose
          length exceeds the client's a priori estimate. In such cases, the 
          client needs to expect an error indication that indicates the 
          existence of the oversize reply.   When this happens, the client can 
          either terminate that RPC transaction, or retry it with a 
          larger reply size estimate.  
        </t>
        <t>
          In the case of NFSv4.0, the use of NFS COMPOUND 
          operations raises the possibility of non-idempotent requests 
          that combine a non-idempotent operation with an operation 
          whose maximum reply size cannot be determined with certainty.
          This makes retrying the operation problematic.  It should
          be noted that many operations normally considered non-idempotent 
          (e.g. WRITE, SETATTR) are actually idempotent.  Truly non-idempotent 
          operations are quite unusual in COMPOUNDs that include operations 
          with uncertain reply sizes.  
        </t>
        <t>
          Depending on the transport version used, the client's choices may be
          restricted as follows:
        <list style="symbols">
          <t>
            The client may be required to treat the error as permanent, with
            retry not allowed.
          </t>
          <t>
            The client may be allowed to reissue the request with a larger
            reply estimate, unless it is a non-idempotent request.  In that
            case, the request may not be retried, and an error is reported 
            to the issuer.
          </t>
          <t>
            The client may be allowed to reissue the request with a larger
            reply estimate, in essentially all cases.  In this case, the
            client has sufficient information to avoid re-executing a 
            non-idempotent request and may, if it chooses, retry all
            requests with a larger reply size.
          </t>
        </list>
        </t>
        <t>
          In the case of Version One, the absence of a distinct error code to 
          signal that a Reply chunk is of inadequate size means that retry in 
          this situation is not available.
        </t>
      </section>
    </section>

    <section title="Upper Layer Binding for NFS Versions 2 And 3" toc="default">
      <t>This Upper Layer Binding specification applies to NFS Version 2 <xref target="RFC1094" pageno="false" format="default"/> and NFS Version 3 <xref target="RFC1813" pageno="false" format="default"/>.  For brevity, in this section a "legacy NFS client" refers to an NFS client using NFS version 2 or NFS version 3 to communicate with an NFS server.  Likewise, a "legacy NFS server" is an NFS server communicating with clients using NFS version 2 or NFS version 3.  </t>
      <t>The following XDR data items in NFS versions 2 and 3 are DDP-eligible: <list style="symbols"><t>The opaque file data argument in the NFS WRITE procedure </t><t>The pathname argument in the NFS SYMLINK procedure </t><t>The opaque file data result in the NFS READ procedure </t><t>The pathname result in the NFS READLINK procedure </t></list> All other argument or result data items in NFS versions 2 and 3 are not DDP-eligible.  </t>
      <t>A legacy NFS client determines the maximum reply size for each operation using the basic criteria outlined in <xref target="PREP-est"/>.  Such clients deal with reply sizes beyond the maximum as described in <xref target="CONV-long"/>. </t>
      <section title="Auxiliary Protocols" anchor="AUX">
        <t>NFS versions 2 and 3 are typically deployed with several other protocols, sometimes referred to as "NFS auxiliary protocols." These are separate RPC programs that define procedures which are not part of the NFS version 2 or version 3 RPC programs.  These include: <list style="symbols"><t>The MOUNT and NLM protocols, introduced in an appendix of <xref target="RFC1813" pageno="false" format="default"/> </t><t>The NSM protocol, described in Chapter 11 of <xref target="NSM" pageno="false" format="default"/> </t><t>The NFSACL protocol, which does not have a public definition (NFSACL here is treated as a de facto standard as there are several interoperating implementations).  </t></list> </t>
        <t>RPC-over-RDMA treats these programs as distinct Upper Layer Protocols <xref target="RFC8166"/>.  To enable the use of these ULPs on an RPC-over-RDMA transport, an Upper Layer Binding specification is provided here for each.  </t>
        <section title="MOUNT, NLM, And NSM Protocols" toc="default">
          <t>Typically MOUNT, NLM, and NSM are conveyed via TCP, even in deployments where NFS operations use RPC-over-RDMA.  When a legacy server supports these programs on RPC-over-RDMA, it advertises the port address via the usual rpcbind service <xref target="RFC1833" pageno="false" format="default"/>.  </t>
          <t>No operation in these protocols conveys a significant data payload, and the size of RPC messages in these protocols is uniformly small.  Therefore, no XDR data items in these protocols are DDP-eligible.  The largest variable-length XDR data item is an xdr_netobj.  In most implementations this data item is not larger than 1024 bytes, making this size a reasonable basis for reply size estimation.  However, since this limit is not specified as part of the protocol, the techniques described in <xref target="PREP-est"/> should be used to deal with situations where these sizes are exceeded.</t>
        </section>
        <section title="NFSACL Protocol" toc="default">
          <t>Legacy clients and servers that support the NFSACL RPC program typically convey NFSACL procedures on the same connection as the NFS RPC program.  This obviates the need for separate rpcbind queries to discover server support for this RPC program.  </t>
          <t>ACLs are typically small, but even large ACLs must be encoded and decoded to some degree.  Thus, no data item in this Upper Layer Protocol is DDP-eligible.  </t>
          <t>For procedures whose replies do not include an ACL object, the size of a reply is determined directly from the NFSACL program's XDR definition.  </t>
          <t>There is no protocol-wide size limit for NFS version 3 ACLs, and there is no mechanism in either the NFSACL or NFS programs for a legacy client to ascertain the largest ACL a legacy server can store.  Legacy client implementations should choose a maximum size for ACLs based on their own internal limits.  A recommended lower bound for this maximum is 32,768 bytes, though a larger Reply chunk (up to the negotiated rsize setting) can be provided.  Since no limit is specified as part of the protocol, the techniques described in <xref target="PREP-est"/> should be used to deal with situations where these recommended bounds are exceeded.</t>
        </section>
      </section>
    </section>
    <section title="Upper Layer Binding for NFS Version 4" toc="default">
      <t>This Upper Layer Binding specification applies to all protocols defined in NFS Version 4.0 <xref target="RFC7530" pageno="false" format="default"/>, NFS Version 4.1 <xref target="RFC5661" pageno="false" format="default"/>, and NFS Version 4.2 <xref target="RFC7862" pageno="false" format="default"/>.  </t>
      <section title="DDP-Eligibility" anchor="sec:nfs4-ddp-eligibility" toc="default">
        <t>Only the following XDR data items in the COMPOUND procedure of all NFS version 4 minor versions are DDP-eligible: <list style="symbols"><t>The opaque data field in the WRITE4args structure </t><t>The linkdata field of the NF4LNK arm in the createtype4 union </t><t>The opaque data field in the READ4resok structure </t><t>The linkdata field in the READLINK4resok structure </t><t>In minor version 2 and newer, the rpc_data field of the read_plus_content union (further restrictions on the use of this data item follow below).  </t></list> </t>
        <section title="READ_PLUS Replies" toc="default">
          <t>The NFS version 4.2 READ_PLUS operation returns a complex data type <xref target="RFC7862" pageno="false" format="default"/>.  The rpr_contents field in the result of this operation is an array of read_plus_content unions, one arm of which contains an opaque byte stream (d_data).  </t>
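          <t>For reference, the result structures involved (reproduced here for convenience from <xref target="RFC7862" pageno="false" format="default"/>, which remains authoritative for the exact XDR definitions) have the following form: </t>
          <figure>
            <artwork xml:space="preserve">
   struct data4 {
           offset4         d_offset;
           opaque          d_data&lt;&gt;;
   };

   union read_plus_content switch (data_content4 rpc_content) {
   case NFS4_CONTENT_DATA:
           data4           rpc_data;
   case NFS4_CONTENT_HOLE:
           data_info4      rpc_hole;
   default:
           void;
   };

   struct read_plus_res4 {
           bool                    rpr_eof;
           read_plus_content       rpr_contents&lt;&gt;;
   };
            </artwork>
          </figure>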
          <t>The size of d_data is limited to the value of the rpa_count field, but the protocol does not bound the number of elements which can be returned in the rpr_contents array.  In order to make the size of READ_PLUS replies predictable by NFS version 4.2 clients, the following restrictions are placed on the use of the READ_PLUS operation on RPC-over-RDMA transports: <list style="symbols"><t>An NFS version 4.2 client MUST NOT provide more than one Write chunk for any READ_PLUS operation.  When providing a Write chunk for a READ_PLUS operation, an NFS version 4.2 client MUST provide a Write chunk that is either empty (which forces all result data items for this operation to be returned inline) or large enough to receive rpa_count bytes in a single element of the rpr_contents array.  </t><t>If the Write chunk provided for a READ_PLUS operation by an NFS version 4.2 client is not empty, an NFS version 4.2 server MUST use that chunk for the first element of the rpr_contents array that has an rpc_data arm.  </t><t>An NFS version 4.2 server MUST NOT return more than two elements in the rpr_contents array of any READ_PLUS operation.  It returns as much of the requested byte range as it can fit within these two elements.  If the NFS version 4.2 server has not asserted rpr_eof in the reply, the NFS version 4.2 client SHOULD send additional READ_PLUS requests for any remaining bytes.  </t></list> </t>
        </section>
      </section>
      <section title="NFS Version 4 Reply Size Estimation" anchor="sec:nfs4-reply-size-estimation" toc="default">
        <t>An NFS version 4 client provides a Reply chunk when the maximum possible reply size is larger than the client's responder inline threshold.  </t>
        <t>There are certain NFS version 4 data items whose size cannot be estimated by clients reliably, however, because there is no protocol-specified size limit on these structures.  These include: <list style="symbols"><t>The attrlist4 field </t><t>Fields containing ACLs, such as fattr4_acl, fattr4_dacl, and fattr4_sacl </t><t>Fields in the fs_locations4 and fs_locations_info4 data structures </t><t>Opaque fields which pertain to pNFS layout metadata, such as loc_body, loh_body, da_addr_body, lou_body, lrf_body, fattr_layout_types, and fs_layout_types </t></list> </t>
        <section title="Reply Size Estimation for Minor Version 0" anchor="sec:reply-size-estimation-for-minor-version-0" toc="default">
          <t>The items enumerated above in <xref target="sec:nfs4-reply-size-estimation" pageno="false" format="default"/> make it difficult to predict the maximum size of GETATTR replies that interrogate variable-length attributes.  As discussed in <xref target="PREP-est"/>, client implementations can rely on their own internal architectural limits to bound the reply size.  However, since such limits are not guaranteed to be reliable, use of the techniques discussed in <xref target="PREP-retry"/> may sometimes be necessary.  </t>
          <t>It is best to avoid issuing single COMPOUNDs that contain both non-idempotent operations and operations where the maximum reply size cannot be reliably predicted.  </t>
        </section>
        <section title="Reply Size Estimation for Minor Version 1 And Newer" toc="default">
          <t>In NFS version 4.1 and newer minor versions, the csa_fore_chan_attrs argument of the CREATE_SESSION operation contains a ca_maxresponsesize field.  The value in this field can be taken as the absolute maximum size of replies generated by the replying NFS version 4 server.  </t>
          <t>This value can be used in cases where it is not possible to estimate a reply size upper bound precisely.  In practice, objects such as ACLs, named attributes, layout bodies, and security labels are much smaller than this maximum.  </t>
        </section>
      </section>
      <section title="NFS Version 4 COMPOUND Requests" toc="default">
        <t>The NFS version 4 COMPOUND procedure allows the transmission of more than one DDP-eligible data item per Call and Reply message.  An NFS version 4 client provides XDR Position values in each Read chunk to disambiguate which chunk is associated with which argument data item.  However, NFS version 4 server and client implementations must agree in advance on how to pair Write chunks with returned result data items.  </t>
        <t>The mechanism specified in Section 4.3.2 of <xref target="RFC8166"/> is applied here, with additional restrictions that appear below.  In the following list, an "NFS Read" operation refers to any NFS version 4 operation which has a DDP-eligible result data item (i.e., a READ, READ_PLUS, or READLINK operation).  <list style="symbols"><t>If an NFS version 4 client wishes all DDP-eligible items in an NFS reply to be conveyed inline, it leaves the Write list empty.  </t><t>The first chunk in the Write list MUST be used by the first NFS Read operation in an NFS version 4 COMPOUND procedure.  The next Write chunk is used by the next NFS Read operation, and so on.  </t><t>If an NFS version 4 client has provided a matching non-empty Write chunk, then the corresponding NFS Read operation MUST return its DDP-eligible data item using that chunk.  </t><t>If an NFS version 4 client has provided an empty matching Write chunk, then the corresponding NFS Read operation MUST return all of its result data items inline.  </t><t>If an NFS Read operation returns a union arm which does not contain a DDP-eligible result, and the NFS version 4 client has provided a matching non-empty Write chunk, an NFS version 4 server MUST return an empty Write chunk in that Write list position.  </t><t>If there are more NFS Read operations than Write chunks, then the remaining NFS Read operations in an NFS version 4 COMPOUND that have no matching Write chunk MUST return their results inline.  </t></list> </t>
        <section title="NFS Version 4 COMPOUND Example" toc="default">
          <t>The following example shows a Write list with three Write chunks, A, B, and C.  The NFS version 4 server consumes the provided Write chunks by writing the results of the designated operations in the compound request (READ and READLINK) back to each chunk.  </t>
          <figure title="" suppress-title="false" align="left" alt="" width="" height="">
            <artwork xml:space="preserve" name="" type="" align="left" alt="" width="" height="">

   Write list:

      A --&gt; B --&gt; C

   NFS version 4 COMPOUND request:

      PUTFH LOOKUP READ PUTFH LOOKUP READLINK PUTFH LOOKUP READ
                    |                   |                   |
                    v                   v                   v
                    A                   B                   C

</artwork>
          </figure>
          <t>If the NFS version 4 client does not want to have the READLINK result returned via RDMA, it provides an empty Write chunk for buffer B to indicate that the READLINK result must be returned inline.  </t>
        </section>
      </section>
      <section title="NFS Version 4 Callback" toc="default">
        <t>The NFS version 4 protocols support server-initiated callbacks to notify clients of events such as recalled delegations.  </t>
        <section title="NFS Version 4.0 Callback" toc="default">
          <t>NFS version 4.0 implementations typically employ a separate TCP connection to handle callback operations, even when the forward channel uses an RPC-over-RDMA transport.  </t>
          <t>No operation in the NFS version 4.0 callback RPC program conveys a significant data payload.  Therefore, no XDR data item in this RPC program is DDP-eligible.  </t>
          <t>A CB_RECALL reply is small and fixed in size.  The CB_GETATTR reply contains a variable-length fattr4 data item.  See <xref target="sec:reply-size-estimation-for-minor-version-0" pageno="false" format="default"/> for a discussion of reply size prediction for this data item.  </t>
          <t>An NFS version 4.0 client advertises netids and ad hoc port addresses for contacting its NFS version 4.0 callback service using the SETCLIENTID operation.  </t>
        </section>
        <section title="NFS Version 4.1 Callback" toc="default">
          <t>In NFS version 4.1 and newer minor versions, callback operations may appear on the same connection as is used for NFS version 4 forward channel client requests.  NFS version 4 clients and servers MUST use the mechanism described in <xref target="RFC8167"/> when backchannel operations are conveyed on RPC-over-RDMA transports.  </t>
          <t>The csa_back_chan_attrs argument of the CREATE_SESSION operation contains a ca_maxresponsesize field.  The value in this field can be taken as the absolute maximum size of backchannel replies generated by a replying NFS version 4 client.  </t>
          <t>There are no DDP-eligible data items in callback procedures defined in NFS version 4.1 or NFS version 4.2.  However, some callback operations, such as messages that convey device ID information, can be large, in which case a Long Call or Reply might be required.  </t>
          <t>When an NFS version 4.1 client reports a backchannel ca_maxrequestsize that is larger than the connection's inline thresholds, the NFS version 4 client can support Long Calls in the backchannel.  Otherwise, an NFS version 4 server MUST use only Short messages to convey backchannel operations.  </t>
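          <t>The rule above reduces to a simple threshold comparison.  The following is a hypothetical sketch, not taken from any implementation; the function and parameter names are illustrative:  </t>
          <figure title="" suppress-title="false" align="left" alt="" width="" height="">
            <artwork xml:space="preserve" name="" type="" align="left" alt="" width="" height="">
```python
# Illustrative sketch: decide how a server may convey a backchannel
# call, per the rule above.  All names are hypothetical.
def backchannel_call_form(ca_maxrequestsize, inline_threshold):
    """Return "long-call" when the client-reported ca_maxrequestsize
    permits Long Calls; otherwise only Short messages may be used."""
    if ca_maxrequestsize > inline_threshold:
        return "long-call"
    return "short-message"
```
</artwork>
          </figure>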
        </section>
      </section>
      <section title="Session-Related Considerations" anchor="sec:session-related-considerations" toc="default">
        <t>Typically, the presence of an NFS session <xref target="RFC5661" pageno="false" format="default"/> has no effect on the operation of RPC-over-RDMA.  None of the operations introduced to support NFS sessions contain DDP-eligible data items.  There is no need to match the number of session slots with the number of available RPC-over-RDMA credits.  </t>
        <t>However, there are some rare error conditions that require special handling when an NFS session is operating on an RPC-over-RDMA transport.  For example, a requester might receive, in response to an RPC request, an RDMA_ERROR message with an rdma_err value of ERR_CHUNK, or an RDMA_MSG containing an RPC reply with a status of GARBAGE_ARGS.  Within RPC-over-RDMA Version One, this class of error can be generated for two different reasons: <list style="symbols"><t>An XDR error was detected while parsing the RPC-over-RDMA headers.  </t><t>An error occurred while sending the response because, for example, a necessary Reply chunk was not provided or the one provided is of insufficient length.  </t></list> </t>
        <t>These two situations, which arise due to incorrect implementations or underestimation of reply size, have different implications with regard to Exactly-Once Semantics.  An XDR error in decoding the request precludes the execution of the request on the responder, but failure to send a reply indicates that some or all of the operations were executed.  </t>
        <t>In both instances, the client SHOULD NOT retry the operation without first addressing the inadequacy of its reply resources, since such a retry can result in the same sort of error seen previously.  Instead, it is best to treat the operation as having completed unsuccessfully and to report an error to the consumer that requested the RPC.  </t>
        <t>In addition, an error response does not carry the result of the SEQUENCE operation, which identifies the session, slot, and sequence ID for the request that failed.  The xid associated with the request, obtained from the rdma_xid field of the RDMA_ERROR or RDMA_MSG message, must instead be used to determine the session and slot for the failed request, and the slot must be properly retired.  If this is not done, the slot could be rendered permanently unavailable.  </t>
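        <t>The xid-based recovery described above can be sketched as follows.  This is a hypothetical illustration, assuming a client keeps a table of in-flight requests keyed by xid; the class, method, and field names are invented for this example:  </t>
        <figure title="" suppress-title="false" align="left" alt="" width="" height="">
          <artwork xml:space="preserve" name="" type="" align="left" alt="" width="" height="">
```python
# Illustrative sketch: recover the session slot for a failed request
# using only the xid from the rdma_xid field.  The pending-request
# table and all names here are hypothetical.

class SlotRecovery:
    def __init__(self):
        # xid -> (session_id, slot_id), recorded at send time
        self.pending = {}

    def record_send(self, xid, session_id, slot_id):
        self.pending[xid] = (session_id, slot_id)

    def on_transport_error(self, rdma_xid, sessions):
        """On RDMA_ERROR or GARBAGE_ARGS: retire the slot so that it
        is not rendered permanently unavailable, then surface an
        error to the RPC consumer rather than retrying."""
        session_id, slot_id = self.pending.pop(rdma_xid)
        sessions[session_id].retire_slot(slot_id)  # free slot for reuse
        return "EIO"  # report failure; do not retry as-is
```
</artwork>
        </figure>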
      </section>
      <section title="Connection Keep-Alive" toc="default">
        <t>NFS version 4 client implementations often rely on a transport-layer keep-alive mechanism to detect when an NFS version 4 server has become unresponsive.  When an NFS server is no longer responsive, client-side keep-alive terminates the connection, which in turn triggers reconnection and RPC retransmission.  </t>
        <t>Some RDMA transports (such as Reliable Connections on InfiniBand) have no keep-alive mechanism.  Without a disconnect or new RPC traffic, such connections can remain alive long after an NFS server has become unresponsive.  Once an NFS client has consumed all available RPC-over-RDMA credits on that transport connection, it will wait indefinitely for a reply before it can send another RPC request.  </t>
        <t>NFS version 4 clients SHOULD reserve one RPC-over-RDMA credit to use for periodic server or connection health assessment.  This credit can be used to drive an RPC request on an otherwise idle connection, triggering either a quick affirmative server response or immediate connection termination.  </t>
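        <t>The credit-reservation scheme above can be sketched as follows.  This is a hypothetical illustration of the bookkeeping only; the class and method names are invented, and an actual client would spend the reserved credit on a NULL request when the idle check fires:  </t>
        <figure title="" suppress-title="false" align="left" alt="" width="" height="">
          <artwork xml:space="preserve" name="" type="" align="left" alt="" width="" height="">
```python
# Illustrative sketch: hold back one RPC-over-RDMA credit for
# keep-alive probing of an idle connection.  All names are
# hypothetical.
import time

class CreditGate:
    def __init__(self, credits):
        self.available = credits - 1  # one credit reserved for pings
        self.last_traffic = time.monotonic()

    def acquire_for_request(self):
        """Take a credit for a normal RPC; never dips into the
        keep-alive reserve."""
        if self.available == 0:
            return False
        self.available -= 1
        self.last_traffic = time.monotonic()
        return True

    def should_ping(self, idle_timeout):
        """True when the connection has been idle long enough that
        the reserved credit should be spent on a NULL request,
        forcing either a quick reply or a disconnect."""
        return time.monotonic() - self.last_traffic >= idle_timeout
```
</artwork>
        </figure>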
      </section>
    </section>
    <section title="Extending NFS Upper Layer Bindings" toc="default">
      <t>RPC programs such as NFS are required to have an Upper Layer Binding specification to interoperate on RPC-over-RDMA transports <xref target="RFC8166"/>.  Via standards action, the Upper Layer Binding specified in this document can be extended to cover versions of the NFS version 4 protocol specified after NFS version 4 minor version 2, or separately published extensions to an existing NFS version 4 minor version, as described in <xref target="RFC8178"/>.  </t>
    </section>
    <section title="IANA Considerations" toc="default">
      <t>NFS use of direct data placement introduces a need for an additional NFS port number assignment for networks that share traditional UDP and TCP port spaces with RDMA services.  The iWARP <xref target="RFC5041" pageno="false" format="default"/> <xref target="RFC5040" pageno="false" format="default"/> protocol is such an example (InfiniBand is not).  </t>
      <t>NFS servers for versions 2 and 3 <xref target="RFC1094" pageno="false" format="default"/> <xref target="RFC1813" pageno="false" format="default"/> traditionally listen for clients on UDP and TCP port 2049, and additionally, they register these with the portmapper and/or rpcbind <xref target="RFC1833" pageno="false" format="default"/> service.  However, <xref target="RFC7530" pageno="false" format="default"/> requires NFS version 4 servers to listen on TCP port 2049, and they are not required to register.  </t>
      <t>An NFS version 2 or version 3 server supporting RPC-over-RDMA on such a network and registering itself with the RPC portmapper MAY choose an arbitrary port, or MAY use the alternative well-known port number for its RPC-over-RDMA service.  The chosen port MAY be registered with the RPC portmapper under the netid assigned by the requirement in <xref target="RFC8166"/>.  </t>
      <t>An NFS version 4 server supporting RPC-over-RDMA on such a network MUST use the alternative well-known port number for its RPC-over-RDMA service.  Clients SHOULD connect to this well-known port without consulting the RPC portmapper (as for NFS version 4 on TCP transports).  </t>
      <t>The port number assigned to an NFS service over an RPC-over-RDMA transport is available from the IANA port registry <xref target="RFC3232" pageno="false" format="default"/>.  </t>
    </section>
    <section title="Security Considerations" toc="default">
      <t>RPC-over-RDMA supports all RPC security models, including RPCSEC_GSS security and transport-level security <xref target="RFC2203" pageno="false" format="default"/>.  The choice of RDMA Read and RDMA Write to convey RPC argument and results does not affect this, since it changes only the method of data transfer.  Specifically, the requirements of <xref target="RFC8166"/> ensure that this choice does not introduce new vulnerabilities.  </t>
      <t>Because this document defines only the binding of the NFS protocols atop the transport specified in <xref target="RFC8166"/>, all relevant security considerations are addressed at that layer.  </t>
    </section>
  </middle>
  <back>
    <references title="Normative References">
      <reference anchor="RFC1833" target="http://www.rfc-editor.org/info/rfc1833">
        <front>
          <title>Binding Protocols for ONC RPC Version 2</title>
          <author initials="R." surname="Srinivasan" fullname="R. Srinivasan">
            <organization/>
          </author>
          <date year="1995" month="August"/>
          <abstract>
            <t>This document describes the binding protocols used in conjunction with the ONC Remote Procedure Call (ONC RPC Version 2) protocols. [STANDARDS-TRACK]</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="1833"/>
        <seriesInfo name="DOI" value="10.17487/RFC1833"/>
      </reference>
      <reference anchor="RFC2119" target="http://www.rfc-editor.org/info/rfc2119">
        <front>
          <title>Key words for use in RFCs to Indicate Requirement Levels</title>
          <author initials="S." surname="Bradner" fullname="S. Bradner">
            <organization/>
          </author>
          <date year="1997" month="March"/>
          <abstract>
            <t>In many standards track documents several words are used to signify the requirements in the specification.  These words are often capitalized. This document defines these words as they should be interpreted in IETF documents.  This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
          </abstract>
        </front>
        <seriesInfo name="BCP" value="14"/>
        <seriesInfo name="RFC" value="2119"/>
        <seriesInfo name="DOI" value="10.17487/RFC2119"/>
      </reference>
      <reference anchor="RFC2203" target="http://www.rfc-editor.org/info/rfc2203">
        <front>
          <title>RPCSEC_GSS Protocol Specification</title>
          <author initials="M." surname="Eisler" fullname="M. Eisler">
            <organization/>
          </author>
          <author initials="A." surname="Chiu" fullname="A. Chiu">
            <organization/>
          </author>
          <author initials="L." surname="Ling" fullname="L. Ling">
            <organization/>
          </author>
          <date year="1997" month="September"/>
          <abstract>
            <t>This memo describes an ONC/RPC security flavor that allows RPC protocols to access the Generic Security Services Application Programming Interface (referred to henceforth as GSS-API).  [STANDARDS-TRACK]</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="2203"/>
        <seriesInfo name="DOI" value="10.17487/RFC2203"/>
      </reference>
      <reference anchor="RFC5661" target="http://www.rfc-editor.org/info/rfc5661">
        <front>
          <title>Network File System (NFS) Version 4 Minor Version 1 Protocol</title>
          <author initials="S." surname="Shepler" fullname="S. Shepler" role="editor">
            <organization/>
          </author>
          <author initials="M." surname="Eisler" fullname="M. Eisler" role="editor">
            <organization/>
          </author>
          <author initials="D." surname="Noveck" fullname="D. Noveck" role="editor">
            <organization/>
          </author>
          <date year="2010" month="January"/>
          <abstract>
            <t>This document describes the Network File System (NFS) version 4 minor version 1, including features retained from the base protocol (NFS version 4 minor version 0, which is specified in RFC 3530) and protocol extensions made subsequently.  Major extensions introduced in NFS version 4 minor version 1 include Sessions, Directory Delegations, and parallel NFS (pNFS).  NFS version 4 minor version 1 has no dependencies on NFS version 4 minor version 0, and it is considered a separate protocol.  Thus, this document neither updates nor obsoletes RFC 3530.  NFS minor version 1 is deemed superior to NFS minor version 0 with no loss of functionality, and its use is preferred over version 0.  Both NFS minor versions 0 and 1 can be used simultaneously on the same network, between the same client and server.  [STANDARDS-TRACK]</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="5661"/>
        <seriesInfo name="DOI" value="10.17487/RFC5661"/>
      </reference>
      <reference anchor="RFC7530" target="http://www.rfc-editor.org/info/rfc7530">
        <front>
          <title>Network File System (NFS) Version 4 Protocol</title>
          <author initials="T." surname="Haynes" fullname="T. Haynes" role="editor">
            <organization/>
          </author>
          <author initials="D." surname="Noveck" fullname="D. Noveck" role="editor">
            <organization/>
          </author>
          <date year="2015" month="March"/>
          <abstract>
            <t>The Network File System (NFS) version 4 protocol is a distributed file system protocol that builds on the heritage of NFS protocol version 2 (RFC 1094) and version 3 (RFC 1813).  Unlike earlier versions, the NFS version 4 protocol supports traditional file access while integrating support for file locking and the MOUNT protocol. In addition, support for strong security (and its negotiation), COMPOUND operations, client caching, and internationalization has been added.  Of course, attention has been applied to making NFS version 4 operate well in an Internet environment.</t>
            <t>This document, together with the companion External Data Representation (XDR) description document, RFC 7531, obsoletes RFC 3530 as the definition of the NFS version 4 protocol.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="7530"/>
        <seriesInfo name="DOI" value="10.17487/RFC7530"/>
      </reference>
      <reference anchor="RFC7862" target="http://www.rfc-editor.org/info/rfc7862">
        <front>
          <title>Network File System (NFS) Version 4 Minor Version 2 Protocol</title>
          <author initials="T." surname="Haynes" fullname="T. Haynes">
            <organization/>
          </author>
          <date year="2016" month="November"/>
          <abstract>
            <t>This document describes NFS version 4 minor version 2; it describes the protocol extensions made from NFS version 4 minor version 1. Major extensions introduced in NFS version 4 minor version 2 include the following: Server-Side Copy, Application Input/Output (I/O) Advise, Space Reservations, Sparse Files, Application Data Blocks, and Labeled NFS.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="7862"/>
        <seriesInfo name="DOI" value="10.17487/RFC7862"/>
      </reference>
      &RFC8166;
      &RFC8167;
      &RFC8178;
    </references>
    <references title="Informative References">
      <reference anchor="I-D.ietf-nfsv4-rfc5667bis">
        <front>
          <title>Remote Direct Memory Access Transport for Remote Procedure Call, Version One</title>
          <author initials="C" surname="Lever" fullname="Chuck Lever">
            <organization/>
          </author>
          <date month="May" year="2017"/>
        </front>
        <seriesInfo name="Internet-Draft" value="draft-ietf-nfsv4-rfc5667bis-11"/>
        <format type="TXT" target="http://www.ietf.org/internet-drafts/draft-ietf-nfsv4-rfc5667bis-11.txt"/>
      </reference>

      <reference anchor="rpcrdmav2" 
                 target="http://www.ietf.org/id/draft-cel-nfsv4-rpcrdma-version-two-05.txt">
        <front>
          <title>
            RPC-over-RDMA Version Two
          </title>

          <author initials="C." surname="Lever" role="editor">
            <organization>Oracle</organization>
          </author>
          <author initials="D." surname="Noveck">
            <organization>NetApp</organization>
          </author>
          <date month="July" year="2017" />
        </front>
        <annotation>
          Work in progress.
        </annotation>
      </reference>
      <reference anchor="RFC1094" target="http://www.rfc-editor.org/info/rfc1094">
        <front>
          <title>NFS: Network File System Protocol specification</title>
          <author initials="B." surname="Nowicki" fullname="B. Nowicki">
            <organization/>
          </author>
          <date year="1989" month="March"/>
          <abstract>
            <t>This RFC describes a protocol that Sun Microsystems, Inc., and others are using.  A new version of the protocol is under development, but others may benefit from the descriptions of the current protocol, and discussion of some of the design issues.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="1094"/>
        <seriesInfo name="DOI" value="10.17487/RFC1094"/>
      </reference>
      <reference anchor="RFC1813" target="http://www.rfc-editor.org/info/rfc1813">
        <front>
          <title>NFS Version 3 Protocol Specification</title>
          <author initials="B." surname="Callaghan" fullname="B. Callaghan">
            <organization/>
          </author>
          <author initials="B." surname="Pawlowski" fullname="B. Pawlowski">
            <organization/>
          </author>
          <author initials="P." surname="Staubach" fullname="P. Staubach">
            <organization/>
          </author>
          <date year="1995" month="June"/>
          <abstract>
            <t>This paper describes the NFS version 3 protocol.  This paper is provided so that people can write compatible implementations.  This memo provides information for the Internet community.  This memo does not specify an Internet standard of any kind.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="1813"/>
        <seriesInfo name="DOI" value="10.17487/RFC1813"/>
      </reference>
      <reference anchor="RFC3232" target="http://www.rfc-editor.org/info/rfc3232">
        <front>
          <title>Assigned Numbers: RFC 1700 is Replaced by an On-line Database</title>
          <author initials="J." surname="Reynolds" fullname="J. Reynolds" role="editor">
            <organization/>
          </author>
          <date year="2002" month="January"/>
          <abstract>
            <t>This memo obsoletes RFC 1700 (STD 2) "Assigned Numbers", which contained an October 1994 snapshot of assigned Internet protocol parameters.  This memo provides information for the Internet community.</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="3232"/>
        <seriesInfo name="DOI" value="10.17487/RFC3232"/>
      </reference>
      <reference anchor="RFC5040" target="http://www.rfc-editor.org/info/rfc5040">
        <front>
          <title>A Remote Direct Memory Access Protocol Specification</title>
          <author initials="R." surname="Recio" fullname="R. Recio">
            <organization/>
          </author>
          <author initials="B." surname="Metzler" fullname="B. Metzler">
            <organization/>
          </author>
          <author initials="P." surname="Culley" fullname="P. Culley">
            <organization/>
          </author>
          <author initials="J." surname="Hilland" fullname="J. Hilland">
            <organization/>
          </author>
          <author initials="D." surname="Garcia" fullname="D. Garcia">
            <organization/>
          </author>
          <date year="2007" month="October"/>
          <abstract>
            <t>This document defines a Remote Direct Memory Access Protocol (RDMAP) that operates over the Direct Data Placement Protocol (DDP protocol).  RDMAP provides read and write services directly to applications and enables data to be transferred directly into Upper Layer Protocol (ULP) Buffers without intermediate data copies.  It also enables a kernel bypass implementation.  [STANDARDS-TRACK]</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="5040"/>
        <seriesInfo name="DOI" value="10.17487/RFC5040"/>
      </reference>
      <reference anchor="RFC5041" target="http://www.rfc-editor.org/info/rfc5041">
        <front>
          <title>Direct Data Placement over Reliable Transports</title>
          <author initials="H." surname="Shah" fullname="H. Shah">
            <organization/>
          </author>
          <author initials="J." surname="Pinkerton" fullname="J. Pinkerton">
            <organization/>
          </author>
          <author initials="R." surname="Recio" fullname="R. Recio">
            <organization/>
          </author>
          <author initials="P." surname="Culley" fullname="P. Culley">
            <organization/>
          </author>
          <date year="2007" month="October"/>
          <abstract>
            <t>The Direct Data Placement protocol provides information to Place the  incoming data directly into an upper layer protocol's receive buffer  without intermediate buffers.  This removes excess CPU and memory  utilization associated with transferring data through the  intermediate buffers.  [STANDARDS-TRACK]</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="5041"/>
        <seriesInfo name="DOI" value="10.17487/RFC5041"/>
      </reference>
      <reference anchor="RFC5667" target="http://www.rfc-editor.org/info/rfc5667">
        <front>
          <title>Network File System (NFS) Direct Data Placement</title>
          <author initials="T." surname="Talpey" fullname="T. Talpey">
            <organization/>
          </author>
          <author initials="B." surname="Callaghan" fullname="B. Callaghan">
            <organization/>
          </author>
          <date year="2010" month="January"/>
          <abstract>
            <t>This document defines the bindings of the various Network File System (NFS) versions to the Remote Direct Memory Access (RDMA) operations supported by the RPC/RDMA transport protocol.  It describes the use of direct data placement by means of server-initiated RDMA operations into client-supplied buffers for implementations of NFS versions 2, 3, 4, and 4.1 over such an RDMA transport.  [STANDARDS-TRACK]</t>
          </abstract>
        </front>
        <seriesInfo name="RFC" value="5667"/>
        <seriesInfo name="DOI" value="10.17487/RFC5667"/>
      </reference>
      <reference anchor="NSM">
        <front>
          <title>Protocols for Interworking: XNFS, Version 3W</title>
          <author>
            <organization>The Open Group</organization>
          </author>
          <date month="February" year="1998"/>
          <abstract>
            <t>This Technical Standard is aligned with Sun's NFS Version 3, and incorporates the Sun WebNFS&#8482; extensions.  The process of accessing remote files and directories as though they were part of the local file system hierarchy is commonly known as Transparent File Access (TFA).  The most widely used heterogeneous TFA architecture is the Network File System (NFS), originally developed by Sun Microsystems.  The Open Group XNFS offers a complete solution to transparent file access between open system-compliant systems, through the XNFS protocols for interoperability, and The Open Group XSI interfaces for application/user portability (as identified in several XNFS appendixes).  </t>
          </abstract>
        </front>
      </reference>
    </references>
    <section title="Acknowledgments" toc="default">
      <t>
        The author gratefully acknowledges the work of Brent Callaghan and
        Tom Talpey on the original NFS Direct Data Placement specification 
        <xref target="RFC5667"/>.  
      </t>
      <t>
        A large part of the material in this document is taken from
        <xref target="I-D.ietf-nfsv4-rfc5667bis"/> written by Chuck Lever.  The author 
        wishes to acknowledge the debt he owes to Chuck for his work in
        providing an updated Upper Layer Binding for the NFS-related
        protocols. 
      </t>
      <t>
        The author also wishes to thank Bill Baker and Greg Marsden for
        their support of the work to revive RPC-over-RDMA.  
      </t>
    </section>
  </back>
</rfc>
