Network File System Version 4                                  C. Lever
Internet-Draft                                                   Oracle
Intended status: Standards Track                       February 8, 2017
Expires: August 12, 2017

   Bi-directional Remote Procedure Call On RPC-over-RDMA Transports
                draft-ietf-nfsv4-rpcrdma-bidirection-07

Abstract

   Minor versions of Network File System (NFS) version 4 newer than
   minor version 0 work best when Remote Procedure Call (RPC)
   transports can send RPC transactions in both directions on the same
   connection.  This document describes how RPC transport endpoints
   capable of Remote Direct Memory Access (RDMA) convey RPCs in both
   directions on a single connection.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on August 12, 2017.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Table of Contents

   1.  Introduction
   2.  Understanding RPC Direction
   3.  Immediate Uses Of Bi-Directional RPC-over-RDMA
   4.  Flow Control
   5.  Sending And Receiving Operations In The Reverse Direction
   6.  In the Absence of Support For Reverse Direction Operation
   7.  Considerations For Upper Layer Bindings
   8.  Security Considerations
   9.  IANA Considerations
   10. Normative References
   Appendix A.  Acknowledgements
   Author's Address

1.  Introduction

   RPC-over-RDMA transports, introduced in [I-D.ietf-nfsv4-rfc5666bis],
   efficiently convey Remote Procedure Call transactions (RPCs) on
   transport layers capable of Remote Direct Memory Access (RDMA).  The
   purpose of this document is to enable concurrent operation in both
   directions on a single transport connection using RPC-over-RDMA
   protocol versions that do not have specific facilities for reverse
   direction operation.

   Reverse direction RPC transactions are necessary for the operation
   of version 4.1 of the Network File System (NFS), and in particular
   of Parallel NFS (pNFS) [RFC5661], though any Upper Layer Protocol
   implementation may make use of them.  An Upper Layer Binding for NFS
   version 4.x callback operation is additionally required (see
   Section 7) but is not provided in this document.

   For example, using the approach described herein, RPC transactions
   can be conveyed in both directions on the same RPC-over-RDMA Version
   One connection without changes to the RPC-over-RDMA Version One
   protocol.  This document does not update the protocol specified in
   [I-D.ietf-nfsv4-rfc5666bis].

   The remainder of this document assumes familiarity with the
   terminology and concepts contained in [I-D.ietf-nfsv4-rfc5666bis],
   especially Sections 2 and 3.

2.  Understanding RPC Direction

   The Open Network Computing (ONC) Remote Procedure Call (RPC)
   protocol, as described in [RFC5531], is architected as a message-
   passing protocol between one server and one or more clients.  ONC
   RPC transactions are made up of two types of messages.

   A CALL message, or "Call", requests work.  A Call is designated by
   the value CALL in the message's msg_type field.  An arbitrary unique
   value is placed in the message's XID field.  A host that originates
   a Call is referred to in this document as a "Requester."

   A REPLY message, or "Reply", reports the results of work requested
   by a Call.  A Reply is designated by the value REPLY in the
   message's msg_type field.  The value contained in the message's XID
   field is copied from the Call whose results are being returned.  A
   host that emits a Reply is referred to as a "Responder."

   Typically, a Call results in a corresponding Reply.  A Reply is
   never sent without a corresponding Call.
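   The following non-normative C fragment illustrates the relationship
   between these two message types: the msg_type field distinguishes a
   Call from a Reply, and a Reply carries the XID of the Call it
   completes.  The CALL and REPLY values follow [RFC5531]; the
   pending-call table and helper function are illustrative only.

       /* Illustrative sketch; not part of any protocol
        * specification. */
       #include <stdint.h>
       #include <stddef.h>

       enum msg_type { CALL = 0, REPLY = 1 };

       struct rpc_msg_hdr {
           uint32_t xid;       /* transaction ID */
           uint32_t msg_type;  /* CALL or REPLY */
       };

       struct pending_call {
           uint32_t xid;       /* XID of a Call awaiting its Reply */
           int      in_use;
       };

       /* Locate the pending Call whose XID matches an incoming
        * Reply; returns NULL if the Reply matches no outstanding
        * Call. */
       static struct pending_call *
       match_reply(struct pending_call *table, size_t len,
                   const struct rpc_msg_hdr *reply)
       {
           size_t i;

           if (reply->msg_type != REPLY)
               return NULL;
           for (i = 0; i < len; i++)
               if (table[i].in_use && table[i].xid == reply->xid)
                   return &table[i];
           return NULL;
       }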
   RPC-over-RDMA is a connection-oriented RPC transport.  In all cases,
   when a connection-oriented transport is used, ONC RPC client
   endpoints are responsible for initiating transport connections,
   while ONC RPC service endpoints passively await incoming connection
   requests.

   RPC direction on connectionless RPC transports is not addressed in
   this document.

2.1.  Forward Direction

   Traditionally, an ONC RPC client acts as a Requester, while an ONC
   RPC service acts as a Responder.  This form of message passing is
   referred to as "forward direction" operation.

2.2.  Reverse Direction

   The ONC RPC specification [RFC5531] does not forbid passing messages
   in the other direction.  An ONC RPC service endpoint can act as a
   Requester, in which case an ONC RPC client endpoint acts as a
   Responder.  This form of message passing is referred to as "reverse
   direction" operation.

   During reverse direction operation, the ONC RPC client is
   responsible for establishing transport connections, even though ONC
   RPC Calls come from the ONC RPC server.

   ONC RPC clients and servers are optimized to perform and scale well
   while handling traffic in the forward direction, and might not be
   prepared to handle operation in the reverse direction.  Not until
   NFS version 4.1 [RFC5661] has there been a strong need to handle
   reverse direction operation.

2.3.  Bi-directional Operation

   A pair of connected RPC endpoints may choose to use only forward
   direction or only reverse direction operations on a particular
   transport, or these endpoints may send Calls in both directions
   concurrently on the same transport.

   "Bi-directional operation" occurs when both transport endpoints act
   as a Requester and a Responder at the same time.

   Bi-directionality is an extension of RPC transport connection
   sharing.  Two RPC endpoints wish to exchange independent RPC
   messages over a shared connection, but in opposite directions.
   These messages may or may not be related to the same workloads or
   RPC Programs.

2.4.  XID Values

   Section 9 of [RFC5531] introduces the ONC RPC transaction
   identifier, or "XID" for short.  The value of an XID is interpreted
   in the context of the message's msg_type field.

   o  The XID of a Call is arbitrary but is unique among outstanding
      Calls from that Requester.

   o  The XID of a Reply always matches that of the initiating Call.

   When receiving a Reply, a Requester matches the XID value in the
   Reply with a Call it previously sent.

2.4.1.  XID Generation

   During bi-directional operation, forward and reverse direction XIDs
   are typically generated on distinct hosts by possibly different
   algorithms.  There is no coordination between forward and reverse
   direction XID generation.

   Therefore, a forward direction Requester MAY use the same XID value
   at the same time as a reverse direction Requester on the same
   transport connection.  Though such concurrent requests use the same
   XID value, they represent distinct ONC RPC transactions.
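   Because the two XID spaces are not coordinated, an endpoint
   operating bi-directionally tracks the Calls it has sent in each
   direction separately and matches an incoming Reply only against the
   table for the direction in which it acted as Requester.  The
   following non-normative C sketch shows this arrangement; all names
   in it are illustrative.

       #include <stdint.h>
       #include <stdbool.h>
       #include <stddef.h>

       #define MAX_PENDING 64

       struct pending_xids {
           uint32_t xid[MAX_PENDING];
           bool     in_use[MAX_PENDING];
       };

       struct rpc_endpoint {
           struct pending_xids fwd_calls;  /* forward Calls sent */
           struct pending_xids rev_calls;  /* reverse Calls sent */
       };

       /* The same XID value may be outstanding in both tables at
        * once; each occurrence is a distinct ONC RPC transaction,
        * so lookups never cross directions. */
       static bool
       xid_is_pending(const struct pending_xids *table, uint32_t xid)
       {
           size_t i;

           for (i = 0; i < MAX_PENDING; i++)
               if (table->in_use[i] && table->xid[i] == xid)
                   return true;
           return false;
       }

   A receiver that is the forward direction Requester consults only
   fwd_calls when a Reply arrives; a receiver that has sent reverse
   direction Calls consults only rev_calls.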
3.  Immediate Uses Of Bi-Directional RPC-over-RDMA

3.1.  NFS version 4.0 Callback Operation

   An NFS version 4.0 client employs a traditional ONC RPC client to
   send NFS requests to an NFS version 4.0 server's traditional ONC RPC
   service [RFC7530].  NFS version 4.0 requests flow in the forward
   direction on a connection established by the client.  This
   connection is referred to as a "forechannel" connection.

   An NFS version 4.x "delegation" is simply a promise made by a server
   that it will notify a client before another client or program
   running on the server is allowed access to a file.  With this
   guarantee, that client can operate as sole accessor of the file.  In
   particular, it can manage the file's data and metadata caches
   aggressively.

   To administer file delegations, NFS version 4.0 introduces the use
   of callback operations, or "callbacks", in Section 10.2 of
   [RFC7530].  An NFS version 4.0 server sets up a forward direction
   ONC RPC client, and an NFS version 4.0 client sets up a forward
   direction ONC RPC service.  Callbacks flow in the forward direction
   on a connection established between the server's callback client and
   the client's callback service.  This connection is distinct from
   connections being used as forechannels and is referred to as a
   "backchannel connection."

   When an RDMA transport is used as a forechannel, an NFS version 4.0
   client typically provides a TCP-based callback service.  The
   client's SETCLIENTID operation advertises the callback service
   endpoint with a "tcp" or "tcp6" netid.  The server then connects to
   this service using a TCP socket.

   NFS version 4.0 implementations can function without a backchannel
   in place.  In this case, the NFS server does not grant file
   delegations.  This might result in a negative performance effect,
   but correctness is not affected.

3.2.  NFS version 4.1 Callback Operation

   NFS version 4.1 supports file delegation in a similar fashion to NFS
   version 4.0 and extends the callback mechanism to manage pNFS
   layouts, as discussed in Section 12 of [RFC5661].

   NFS version 4.1 transport connections are initiated by NFS version
   4.1 clients.  Therefore, NFS version 4.1 servers send callbacks to
   clients in the reverse direction on connections established by NFS
   version 4.1 clients.

   NFS version 4.1 clients and servers indicate to their peers that a
   backchannel capability is available on a given transport in the
   arguments and results of the NFS CREATE_SESSION or
   BIND_CONN_TO_SESSION operations.

   NFS version 4.1 clients may establish distinct transport connections
   for forechannel and backchannel operation, or they may combine
   forechannel and backchannel operation on one transport connection
   using bi-directional operation.

   Without a reverse direction RPC-over-RDMA capability, an NFS version
   4.1 client must additionally connect using a transport with reverse
   direction capability to use as a backchannel.  Opening an
   independent TCP socket is the only choice for an NFS version 4.1
   backchannel connection in this case.

   Implementations often find it more convenient to use a single
   combined transport (i.e., a transport that is capable of
   bi-directional operation).  This simplifies connection establishment
   and recovery during network partitions or when one endpoint
   restarts.  This can also enable better scaling by using fewer
   transport connections to perform the same work.

   As with NFS version 4.0, if a backchannel is not in use, an NFS
   version 4.1 server does not grant delegations.  Because NFS version
   4.1 relies on callbacks to manage pNFS layout state, pNFS operation
   is not possible without a backchannel.
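   As a concrete illustration of the preceding paragraphs, the
   following non-normative C sketch shows an NFS version 4.1 client
   requesting that the connection carrying CREATE_SESSION also be used
   as the backchannel.  The CREATE_SESSION4_FLAG_CONN_BACK_CHAN flag is
   defined in [RFC5661]; the surrounding types and helper function are
   illustrative only.

       #include <stdint.h>
       #include <stdbool.h>

       #define CREATE_SESSION4_FLAG_CONN_BACK_CHAN 0x00000002

       struct create_session_args {
           uint32_t csa_flags;
           /* ... other CREATE_SESSION arguments elided ... */
       };

       /* Request a combined fore- and backchannel only when this
        * client has already posted the receive resources needed for
        * reverse direction operation (see Section 4.3.1). */
       static void
       set_session_flags(struct create_session_args *args,
                         bool reverse_direction_ready)
       {
           args->csa_flags = 0;
           if (reverse_direction_ready)
               args->csa_flags |=
                   CREATE_SESSION4_FLAG_CONN_BACK_CHAN;
       }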
4.  Flow Control

   For an RDMA Send operation to work properly, the receiving peer must
   have posted a receive buffer in which to accept the incoming
   message.  If a receiver hasn't posted enough buffers to accommodate
   each incoming Send operation, the receiving RDMA provider is allowed
   to terminate the RDMA connection.

   RPC-over-RDMA transport protocols provide built-in send flow control
   to prevent overrunning the number of pre-posted receive buffers on a
   connection's receive endpoint.  For RPC-over-RDMA Version One, this
   is discussed in Section 4.3 of [I-D.ietf-nfsv4-rfc5666bis].

4.1.  Reverse Direction Credits

   RPC-over-RDMA credits work the same way in the reverse direction as
   they do in the forward direction.  However, forward direction
   credits and reverse direction credits on the same connection are
   accounted separately.

   The forward direction credit value retains the same meaning whether
   or not there are reverse direction resources associated with an
   RPC-over-RDMA transport connection.  This is the number of RPC
   requests the forward direction Responder (the ONC RPC server) is
   prepared to receive concurrently.

   The reverse direction credit value is the number of RPC requests the
   reverse direction Responder (the ONC RPC client) is prepared to
   receive concurrently.  The reverse direction credit value MAY be
   different from the forward direction credit value.

   During bi-directional operation, each receiver has to decide whether
   an incoming message contains a credit request (the receiver is
   acting as a Responder) or a credit grant (the receiver is acting as
   a Requester) and apply the credit value accordingly.

   When message direction is not fully determined by context (e.g.,
   suggested by the definition of the RPC-over-RDMA version that is in
   use) or by an accompanying RPC message payload with a call direction
   field, it is not possible for the receiver to tell with certainty
   whether the header credit value is a request or a grant.  In such
   cases, the receiver MUST ignore the header's credit value.

4.2.  Inline Thresholds

   Forward and reverse direction operation on the same connection share
   the same receive buffers.  Therefore, the inline threshold values
   for the forward direction and the reverse direction are the same.
   The call inline threshold for the reverse direction is the same as
   the reply inline threshold for the forward direction, and vice
   versa.  For more information, see Section 4.3.2 of
   [I-D.ietf-nfsv4-rfc5666bis].

4.3.  Managing Receive Buffers

   An RPC-over-RDMA transport endpoint must post receive buffers before
   it can receive and process incoming RPC-over-RDMA messages.  If a
   sender transmits a message for a receiver that has no posted receive
   buffer, the RDMA provider is allowed to drop the RDMA connection.

4.3.1.  Client Receive Buffers

   Typically, an RPC-over-RDMA Requester posts only as many receive
   buffers as there are outstanding RPC Calls.  A client endpoint
   without reverse direction support might therefore at times have no
   available receive buffers.

   To receive incoming reverse direction Calls, an RPC-over-RDMA client
   endpoint must post enough additional receive buffers to match its
   advertised reverse direction credit value.  Each outstanding forward
   direction RPC requires an additional receive buffer above this
   minimum.
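   The receive buffer accounting described above can be summarized in a
   short non-normative C fragment.  Only the arithmetic comes from this
   section; the function and parameter names are illustrative.

       #include <stdint.h>

       /* A client supporting reverse direction operation keeps
        * enough buffers posted to absorb a full set of reverse
        * direction Calls (its advertised reverse direction credit
        * value) plus one Reply for each forward direction Call
        * currently outstanding. */
       static uint32_t
       client_receive_buffers_needed(uint32_t reverse_credits,
                                     uint32_t forward_calls_out)
       {
           return reverse_credits + forward_calls_out;
       }

   The same computation applies when buffers are re-posted on a fresh
   connection, with forward_calls_out counting the forward direction
   Calls to be retransmitted, as the next paragraph describes.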
   When an RDMA transport connection is lost, all active receive
   buffers are flushed and are no longer available to receive incoming
   messages.  When a fresh transport connection is established, a
   client endpoint must re-post a receive buffer to handle the Reply
   for each retransmitted forward direction Call, and a full set of
   receive buffers to handle reverse direction Calls.

4.3.2.  Server Receive Buffers

   A forward direction RPC-over-RDMA service endpoint posts as many
   receive buffers as it expects incoming forward direction Calls.
   That is, it posts no fewer buffers than the number of credits
   granted in the rdma_credit field of forward direction RPC Replies.

   To receive incoming reverse direction Replies, an RPC-over-RDMA
   server endpoint must post enough additional receive buffers to
   handle Replies for each reverse direction Call it sends.

   When the existing transport connection is lost, all active receive
   buffers are flushed and are no longer available to receive incoming
   messages.  When a fresh transport connection is established, a
   server endpoint must re-post a receive buffer to handle the Reply
   for each retransmitted reverse direction Call, and a full set of
   receive buffers for receiving forward direction Calls.

5.  Sending And Receiving Operations In The Reverse Direction

   The operation of RPC-over-RDMA transports in the forward direction
   is defined in [RFC5531] and [I-D.ietf-nfsv4-rfc5666bis].  In this
   section, a mechanism for reverse direction operation on RPC-over-
   RDMA is defined.  Reverse direction operation used in combination
   with forward direction operation enables bi-directional
   communication on a common RPC-over-RDMA transport connection.

   Certain fields in the RPC-over-RDMA header have a fixed position in
   all versions of RPC-over-RDMA.  The normative specification of these
   fields is contained in Section 5.1 of [I-D.ietf-nfsv4-rfc5666bis].

5.1.  Sending A Call In The Reverse Direction

   To form a reverse direction RPC-over-RDMA Call message, an ONC RPC
   service endpoint constructs an RPC-over-RDMA header containing a
   fresh RPC XID in the rdma_xid field (see Section 2.4 for full
   requirements).

   The rdma_vers field MUST contain the same value in reverse and
   forward direction Call messages on the same connection.

   The number of requested reverse direction credits is placed in the
   rdma_credit field (see Section 4).

   Whether presented inline or as a separate chunk, the ONC RPC Call
   header MUST start with the same XID value that is present in the
   RPC-over-RDMA header, and the RPC header's msg_type field MUST
   contain the value CALL.

5.2.  Sending A Reply In The Reverse Direction

   To form a reverse direction RPC-over-RDMA Reply message, an ONC RPC
   client endpoint constructs an RPC-over-RDMA header containing a copy
   of the matching ONC RPC Call's RPC XID in the rdma_xid field (see
   Section 2.4 for full requirements).

   The rdma_vers field MUST contain the same value in a reverse
   direction Reply message as in the matching Call message.

   The number of granted reverse direction credits is placed in the
   rdma_credit field (see Section 4).

   Whether presented inline or as a separate chunk, the ONC RPC Reply
   header MUST start with the same XID value that is present in the
   RPC-over-RDMA header, and the RPC header's msg_type field MUST
   contain the value REPLY.
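   The header construction rules in Sections 5.1 and 5.2 can be
   summarized in the following non-normative C sketch.  The rdma_xid,
   rdma_vers, and rdma_credit fields are specified in
   [I-D.ietf-nfsv4-rfc5666bis] and are XDR-encoded on the wire; the
   struct and function names here are illustrative host-side
   representations only.

       #include <stdint.h>

       struct rpcrdma_hdr {
           uint32_t rdma_xid;    /* mirrors the RPC message's XID */
           uint32_t rdma_vers;   /* same value in both directions */
           uint32_t rdma_credit; /* credit request or grant */
           /* ... rdma_proc and chunk lists elided ... */
       };

       /* Reverse direction Call: a fresh XID, the connection's
        * RPC-over-RDMA version, and the number of reverse direction
        * credits this Requester is requesting. */
       static void
       build_reverse_call_hdr(struct rpcrdma_hdr *hdr,
                              uint32_t fresh_xid, uint32_t conn_vers,
                              uint32_t credits_requested)
       {
           hdr->rdma_xid = fresh_xid;
           hdr->rdma_vers = conn_vers;
           hdr->rdma_credit = credits_requested;
       }

       /* Matching reverse direction Reply: the Call's XID is copied,
        * and rdma_credit now carries the number of reverse direction
        * credits granted. */
       static void
       build_reverse_reply_hdr(struct rpcrdma_hdr *hdr,
                               uint32_t call_xid, uint32_t conn_vers,
                               uint32_t credits_granted)
       {
           hdr->rdma_xid = call_xid;
           hdr->rdma_vers = conn_vers;
           hdr->rdma_credit = credits_granted;
       }

   In both cases, the ONC RPC message that follows the transport header
   begins with the same XID value placed in rdma_xid.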
5.3.  Using Chunks In Reverse Direction Operations

   A "chunk" refers to a portion of a message's Payload stream that is
   moved by a separate mechanism.  Chunk data may be moved by an
   explicit RDMA operation, for example.  Chunks are defined in
   Section 3.4.4 of [I-D.ietf-nfsv4-rfc5666bis].

   Chunks MAY be used in the reverse direction.  They operate the same
   way as in the forward direction.

   A backchannel implementation might not support any Upper Layer
   Protocol that has DDP-eligible data items.  Such Upper Layer
   Protocols may use only small messages, or they may have a native
   mechanism for restricting the size of reverse direction RPC
   messages, obviating the need to handle Long Messages in the reverse
   direction.

   When there is no Upper Layer Protocol requirement for chunks in the
   reverse direction, implementers can choose not to provide support
   for chunks in the reverse direction.  This avoids the complexity of
   adding support for performing RDMA Reads and Writes in the reverse
   direction.

   When chunks are not implemented, RPC messages in the reverse
   direction are always sent using a Short Message, and therefore can
   be no larger than what can be sent inline (that is, without chunks).
   Sending an inline message larger than the inline threshold can
   result in loss of connection.

   If a reverse direction Requester provides a non-empty chunk list to
   a Responder that does not support chunks, the Responder MUST reply
   with an RDMA_ERROR message with the rdma_err field set to ERR_CHUNK.
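   The following non-normative C sketch shows the check a Responder
   without chunk support might apply to an incoming reverse direction
   Call.  RDMA_ERROR and ERR_CHUNK are defined in
   [I-D.ietf-nfsv4-rfc5666bis]; the parsed-header representation and
   function names are illustrative only.

       #include <stdbool.h>

       struct parsed_rpcrdma_msg {
           bool has_read_list;   /* non-empty Read list present */
           bool has_write_list;  /* non-empty Write list present */
           bool has_reply_chunk; /* Reply chunk present */
       };

       /* Returns true when an incoming reverse direction Call must
        * be answered with an RDMA_ERROR message carrying ERR_CHUNK
        * rather than being processed. */
       static bool
       must_reject_with_err_chunk(const struct parsed_rpcrdma_msg *m,
                                  bool chunks_supported)
       {
           if (chunks_supported)
               return false;
           return m->has_read_list || m->has_write_list ||
                  m->has_reply_chunk;
       }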
5.4.  Reverse Direction Retransmission

   In rare cases, an ONC RPC service cannot complete an RPC transaction
   and send a Reply.  This can happen because the transport connection
   was lost, because the Call or Reply message was dropped, or because
   the Upper Layer consumer delayed or dropped the ONC RPC request.
   Typically, the Requester sends the transaction again, reusing the
   same RPC XID.  This is known as an "RPC retransmission".

   In the forward direction, the Requester is the ONC RPC client.  The
   client is always responsible for establishing a transport connection
   before sending again.

   In the reverse direction, the Requester is the ONC RPC server.
   Because an ONC RPC server does not establish transport connections
   with clients, it cannot send a retransmission if there is no
   transport connection.  It must wait for the ONC RPC client to
   re-establish the transport connection before it can retransmit ONC
   RPC transactions in the reverse direction.

   If an ONC RPC client has no work to do, it may be some time before
   it re-establishes a transport connection.  Reverse direction
   Requesters must be prepared to wait indefinitely for a connection to
   be established before a pending reverse direction ONC RPC Call can
   be retransmitted.

   Forward direction Requesters are responsible for maintaining a
   transport connection as long as there is the possibility of reverse
   direction requests.  For example, an NFS version 4.1 client with
   open delegated files or active pNFS layouts should maintain a
   transport connection to enable the NFS server to perform callback
   operations.

6.  In the Absence of Support For Reverse Direction Operation

   An RPC-over-RDMA transport endpoint might not support reverse
   direction operation (and thus it does not support bi-directional
   operation).  There might be no mechanism in the transport
   implementation to do so, or, in an implementation that can support
   operation in the reverse direction, the Upper Layer Protocol
   consumer might not yet have configured or enabled the transport to
   handle reverse direction traffic.

   If an endpoint is not prepared to receive an incoming reverse
   direction message, loss of the RDMA connection might result.  Thus,
   denial of service could result if a sender continues to send reverse
   direction messages after every transport reconnect to an endpoint
   that is not prepared to receive them.

   When dealing with the possibility that the remote peer has no
   transport-level support for reverse direction operation, the Upper
   Layer Protocol becomes responsible for informing peers when reverse
   direction operation is supported.  Otherwise, even a simple reverse
   direction RPC NULL procedure from a peer could result in a lost
   connection.

   Therefore, an Upper Layer Protocol consumer MUST NOT perform reverse
   direction ONC RPC operations until the peer consumer has indicated
   it is prepared to handle them.  A description of Upper Layer
   Protocol mechanisms used for this indication is outside the scope of
   this document.

   For example, an NFS version 4.1 server does not send backchannel
   messages to an NFS version 4.1 client before the NFS version 4.1
   client has sent a CREATE_SESSION or a BIND_CONN_TO_SESSION
   operation.  As long as an NFS version 4.1 client has prepared
   appropriate resources to receive reverse direction operations before
   sending one of these NFS operations, denial of service is avoided.

7.  Considerations For Upper Layer Bindings

   An Upper Layer Protocol that operates on RPC-over-RDMA transports
   may have procedures that include DDP-eligible data items.
   DDP-eligibility is specified in an Upper Layer Binding.  Direction
   of operation does not obviate the need for DDP-eligibility
   statements.

   Reverse-direction-only operation requires the client endpoint to
   establish a fresh connection.  The Upper Layer Binding can specify
   appropriate RPC binding parameters for such connections.

   Bi-directional operation occurs on an already-established
   connection.  Specification of RPC binding parameters is usually not
   necessary in this case.

   For bi-directional operation, other considerations may apply when
   distinct RPC Programs share an RPC-over-RDMA transport connection
   concurrently.  Consult Section 6 of [I-D.ietf-nfsv4-rfc5666bis] for
   details about what else may be contained in an Upper Layer Binding.

8.  Security Considerations

   Security considerations for operation on RPC-over-RDMA transports
   are outlined in Section 9 of [I-D.ietf-nfsv4-rfc5666bis].

9.  IANA Considerations

   This document does not require actions by IANA.

10.  Normative References

   [I-D.ietf-nfsv4-rfc5666bis]
              Lever, C., Simpson, W., and T. Talpey, "Remote Direct
              Memory Access Transport for Remote Procedure Call,
              Version One", draft-ietf-nfsv4-rfc5666bis-10 (work in
              progress), February 2017.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC5531]  Thurlow, R., "RPC: Remote Procedure Call Protocol
              Specification Version 2", RFC 5531, May 2009.

   [RFC5661]  Shepler, S., Eisler, M., and D. Noveck, "Network File
              System (NFS) Version 4 Minor Version 1 Protocol",
              RFC 5661, January 2010.

   [RFC7530]  Haynes, T. and D. Noveck, "Network File System (NFS)
              Version 4 Protocol", RFC 7530, March 2015.
Appendix A.  Acknowledgements

   Tom Talpey was an indispensable resource, in addition to creating
   the foundation upon which this work is based.  Our warmest regards
   go to him for his help and support.

   Dave Noveck provided excellent review, constructive suggestions, and
   navigational guidance throughout the process of drafting this
   document.

   Dai Ngo was a solid partner and collaborator.  Together we
   constructed and tested independent prototypes of the changes
   described in this document.

   The author wishes to thank Bill Baker for his unwavering support of
   this work.  In addition, the author gratefully acknowledges the
   expert contributions of Karen Deitke, Chunli Zhang, Mahesh
   Siddheshwar, Steve Wise, and Tom Tucker.

   Special thanks go to Transport Area Director Spencer Dawkins, nfsv4
   Working Group Chair and document shepherd Spencer Shepler, and nfsv4
   Working Group Secretary Tom Haynes for their support.

Author's Address

   Charles Lever
   Oracle Corporation
   1015 Granger Avenue
   Ann Arbor, MI  48104
   USA

   Phone: +1 248 816 6463
   Email: chuck.lever@oracle.com