Internet-Draft                              Stephen Bailey (Sandburst)
Expires: January 2005                             Tom Talpey (NetApp)


           The Architecture of Direct Data Placement (DDP)
               and Remote Direct Memory Access (RDMA)
                        on Internet Protocols
                       draft-ietf-rddp-arch-05

Status of this Memo

   By submitting this Internet-Draft, I certify that any applicable
   patent or other IPR claims of which I am aware have been disclosed,
   or will be disclosed, and any of which I become aware will be
   disclosed, in accordance with RFC 3668.
   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.  The list of
   Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Copyright Notice

   Copyright (C) The Internet Society (2004).  All Rights Reserved.

Abstract

   This document defines an abstract architecture for Direct Data
   Placement (DDP) and Remote Direct Memory Access (RDMA) protocols to
   run on Internet Protocol-suite transports.  This architecture does
   not necessarily reflect the proper way to implement such protocols,
   but is, rather, a descriptive tool for defining and understanding
   the protocols.  DDP allows the efficient placement of data into
   buffers designated by Upper Layer Protocols (e.g. RDMA).  RDMA
   provides the semantics to enable Remote Direct Memory Access
   between peers in a way consistent with application requirements.

Table of Contents

   1.      Introduction
   1.1.    Terminology
   1.2.    DDP and RDMA Protocols
   2.      Architecture
   2.1.    Direct Data Placement (DDP) Protocol Architecture
   2.1.1.  Transport Operations
   2.1.2.  DDP Operations
   2.1.3.  Transport Characteristics in DDP
   2.2.    Remote Direct Memory Access Protocol Architecture
   2.2.1.  RDMA Operations
   2.2.2.  Transport Characteristics in RDMA
   3.      Security Considerations
   4.      IANA Considerations
   5.      Acknowledgements
           Informative References
           Authors' Addresses
           Full Copyright Statement

1.  Introduction

   This document defines an abstract architecture for Direct Data
   Placement (DDP) and Remote Direct Memory Access (RDMA) protocols to
   run on Internet Protocol-suite transports.  This architecture does
   not necessarily reflect the proper way to implement such protocols,
   but is, rather, a descriptive tool for defining and understanding
   the protocols.  This document uses C language notation as a
   shorthand to describe the architectural elements of DDP and RDMA
   protocols.  The choice of C notation is not intended to describe
   concrete protocols or programming interfaces.

   The first part of the document describes the architecture of DDP
   protocols, including what assumptions are made about the transports
   on which DDP is built.  The second part describes the architecture
   of RDMA protocols layered on top of DDP.

1.1.  Terminology

   Before introducing the protocols, certain definitions will be
   useful to guide discussion:

   o  Placement - writing to a data buffer.

   o  Operation - a protocol message, or sequence of messages, which
      provides an architectural semantic, such as reading or writing
      of a data buffer.

   o  Delivery - informing any Upper Layer or application that a
      particular message is available for use.  Delivery therefore
      may be viewed as the "control" signal associated with a unit
      of data.
      Note that the order of delivery is defined more
      strictly than it is for placement.

   o  Completion - informing any Upper Layer or application that a
      particular operation has finished.  A completion, for
      instance, may require the delivery of several messages, or it
      may also reflect that some local processing has finished.

   o  Data Sink - the peer on which any placement occurs.

   o  Data Source - the peer from which the placed data originates.

   o  Steering Tag - a "handle" used to identify memory which is the
      target of placement.  A "tagged" message is one which
      references such a handle.

   o  RDMA Write - an Operation which places data from a local data
      buffer to a remote data buffer specified by a Steering Tag.

   o  RDMA Read - an Operation which places data to a local data
      buffer specified by a Steering Tag from a remote data buffer
      specified by another Steering Tag.

   o  Send - an Operation which places data from a local data buffer
      to a remote data buffer of the data sink's choice.  Sends are
      therefore "untagged".

1.2.  DDP and RDMA Protocols

   The goal of the DDP protocol is to allow the efficient placement of
   data into buffers designated by protocols layered above DDP (e.g.
   RDMA).  This is described in detail in [ROM].  Efficiency may be
   characterized by the minimization of the number of transfers of the
   data over the receiver's system buses.

   The goal of the RDMA protocol is to provide the semantics to enable
   Remote Direct Memory Access between peers in a way consistent with
   application requirements.  The RDMA protocol provides facilities
   immediately useful to existing and future networking, storage, and
   other application protocols [DAFS, FCVI, IB, MYR, SDP, SRVNET,
   VI].

   The DDP and RDMA protocols work together to achieve their
   respective goals.  DDP provides facilities to safely steer payloads
   to specific buffers at the Data Sink.
   RDMA provides facilities to
   Upper Layers for identifying these buffers, controlling the
   transfer of data between peers' buffers, supporting authorized
   bidirectional transfer between buffers, and signalling completion.
   Upper Layer Protocols that do not require the features of RDMA may
   be layered directly on top of DDP.

   The DDP and RDMA protocols are transport independent.  The
   following figure shows the relationship between RDMA, DDP, Upper
   Layer Protocols and Transport.

      +--------------------------------------------------+
      |               Upper Layer Protocol               |
      +---------+------------+---------------------------+
      |         |            |           RDMA            |
      |         |            +---------------------------+
      |         |                    DDP                 |
      |         +----------------------------------------+
      |                    Transport                     |
      +--------------------------------------------------+

2.  Architecture

   The Architecture section is presented in two parts: Direct Data
   Placement Protocol architecture and Remote Direct Memory Access
   Protocol architecture.

2.1.  Direct Data Placement (DDP) Protocol Architecture

   The central idea of general-purpose DDP is that a data sender will
   supplement the data it sends with placement information that allows
   the receiver's network interface to place the data directly at its
   final destination without any copying.  DDP can be used to steer
   received data to its final destination, without requiring layer-
   specific behavior for each different layer.  Data sent with such
   DDP information is said to be `tagged'.

   The central component of the DDP architecture is the `buffer',
   which is an object with beginning and ending addresses, and a
   method (set()) to set the value of an octet at an address.  In many
   cases, a buffer corresponds directly to a portion of host user
   memory.  However, DDP does not depend on this---a buffer could be a
   disk file, or anything else that can be viewed as an addressable
   collection of octets.
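   As a concrete point of reference (not part of the architecture), a
   buffer backed by ordinary host memory can be sketched in real C.
   The names mem_buffer_t and mem_buffer_set, and the concrete choices
   for address_t and data_t, are illustrative assumptions only; the
   architectural notation that follows treats set() as an abstract
   method.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical concrete types; the architecture leaves these abstract. */
typedef size_t address_t;        /* a reference to local memory */
typedef unsigned char data_t;    /* an octet data value */

/* A memory-backed buffer: an addressable collection of octets
 * spanning [start, end), with a bounds-checked set(). */
typedef struct {
    address_t start;             /* first valid address */
    address_t end;               /* one past the last valid address */
    data_t   *mem;               /* backing store covering [start, end) */
} mem_buffer_t;

/* Place one octet at address a.  Returns 0 on success, -1 if the
 * address falls outside the buffer; out-of-bounds placement is
 * refused rather than performed. */
int mem_buffer_set(mem_buffer_t *b, address_t a, data_t v)
{
    if (a < b->start || a >= b->end)
        return -1;
    b->mem[a - b->start] = v;
    return 0;
}
```

   A disk-file-backed variant would keep the same set() contract while
   replacing the backing store.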
   Abstractly, a buffer provides the interface:

      typedef struct {
          const address_t start;
          const address_t end;
          void set(address_t a, data_t v);
      } ddp_buffer_t;

      address_t

         a reference to local memory

      data_t

         an octet data value.

   The protocol layering and in-line data flow of DDP is:

              DDP Client Protocol
      (e.g. RDMA or Upper Layer Protocol)
                     |  ^
    untagged messages|  |untagged message delivery
      tagged messages|  |tagged message delivery
                     v  |
                    DDP+---> data placement
                      ^
                      | transport messages
                      v
                 Transport
      (e.g. SCTP, DCCP, framed TCP)
                      ^
                      | IP datagrams
                      v
                    . . .

   In addition to in-line data flow, the client protocol registers
   buffers with DDP, and DDP performs buffer update (set()) operations
   as a result of receiving tagged messages.

   DDP messages may be split into multiple, smaller DDP messages, each
   in a separate transport message.  However, if the transport is
   unreliable or unordered, messages split across transport messages
   may or may not provide useful behavior, in the same way as
   splitting arbitrary Upper Layer messages across unreliable or
   unordered transport messages may or may not provide useful
   behavior.  In other words, the same considerations apply to
   building client protocols on different types of transports with or
   without the use of DDP.

   A DDP message split across transport messages looks like:

      DDP message:               Transport messages:

      stag=s, offset=o,          message 1:
      notify=y, id=i             |type=ddp  |
      message=                   |stag=s    |
      |aabbccddee|-------.       |offset=o  |
      ~   ...    ~----.   \      |notify=n  |
      |vvwwxxyyzz|-.   \   \     |id=?      |
                    |   \   `--->|aabbccddee|
                    |    \       ~   ...    ~
                    |     +----->|iijjkkllmm|
                    |     |
                    +     |      message 2:
                     \    |      |type=ddp  |
                      \   |      |stag=s    |
                       \  +      |offset=o+n|
                        \  \     |notify=y  |
                         \  \    |id=i      |
                          \  `-->|nnooppqqrr|
                           \     ~   ...    ~
                            `--->|vvwwxxyyzz|

   Although this picture suggests that DDP information is carried in-
   line with the message payload, components of the DDP information
   may also be in transport-specific fields, or derived from
   transport-specific control information if the transport permits.

2.1.1.  Transport Operations

   For the purposes of this architecture, the transport provides:

      void xpt_send(socket_t s, message_t m);
      message_t xpt_recv(socket_t s);
      msize_t xpt_max_msize(socket_t s);

      socket_t

         a transport address, including IP addresses, ports and other
         transport-specific identifiers.

      message_t

         a string of octets.

      msize_t (scalar)

         a message size.

      xpt_send(socket_t s, message_t m)

         send a transport message.

      xpt_recv(socket_t s)

         receive a transport message.

      xpt_max_msize(socket_t s)

         get the current maximum transport message size.  Corresponds,
         roughly, to the current path Maximum Transfer Unit (PMTU),
         adjusted by underlying protocol overheads.

   Real implementations of xpt_send() and xpt_recv() typically return
   error indications, but that is not relevant to this architecture.

2.1.2.  DDP Operations

   The DDP layer provides:

      void ddp_send(socket_t s, message_t m);
      void ddp_send_ddp(socket_t s, message_t m, ddp_addr_t d,
                        ddp_notify_t n);
      void ddp_post_recv(socket_t s, bdesc_t b);
      ddp_ind_t ddp_recv(socket_t s);
      bdesc_t ddp_register(socket_t s, ddp_buffer_t b);
      void ddp_deregister(bhand_t bh);
      msizes_t ddp_max_msizes(socket_t s);

      ddp_addr_t

         the buffer address portion of a tagged message:

         typedef struct {
             stag_t stag;
             address_t offset;
         } ddp_addr_t;

      stag_t (scalar)

         a Steering Tag.  A stag_t identifies the destination buffer
         for tagged messages.  stag_ts are generated when the buffer
         is registered, communicated to the sender by some client
         protocol convention and inserted in DDP messages.
         stag_t values in
         this DDP architecture are assumed to be completely opaque to
         the client protocol, and implementation-dependent.  However,
         particular implementations, such as DDP on a multicast
         transport (see below), may provide the buffer holder some
         control in selecting stag_ts.

      ddp_notify_t

         the notification portion of a DDP message, used to signal that
         the message represents the final fragment of a multi-segmented
         DDP message:

         typedef struct {
             boolean_t notify;
             ddp_msg_id_t i;
         } ddp_notify_t;

      ddp_msg_id_t (scalar)

         a DDP message identifier.  msg_id_ts are chosen by the DDP
         message receiver (buffer holder), communicated to the sender
         by some client protocol convention and inserted in DDP
         messages.  Whether a message reception indication is requested
         for a DDP message is a matter of client protocol convention.
         Unlike stag_ts, the structure of msg_id_ts is opaque to DDP,
         and therefore, completely in the hands of the client protocol.

      bdesc_t

         a description of a registered buffer:

         typedef struct {
             bhand_t bh;
             ddp_addr_t a;
         } bdesc_t;

         `a.offset' is the starting offset of the registered buffer,
         which may have no relationship to the `start' or `end'
         addresses of that buffer.  However, particular
         implementations, such as DDP on a multicast transport (see
         below), may allow some client protocol control over the
         starting offset.

      bhand_t

         an opaque buffer handle used to deregister a buffer.
      recv_message_t

         a description of a completed untagged receive buffer:

         typedef struct {
             bdesc_t b;
             length_t l;
         } recv_message_t;

      ddp_ind_t

         an untagged message, a tagged message reception indication, or
         a tagged message reception error:

         typedef union {
             recv_message_t m;
             ddp_msg_id_t i;
             ddp_err_t e;
         } ddp_ind_t;

      ddp_err_t

         indicates an error while receiving a tagged message, typically
         `offset' out of bounds, or `stag' is not registered to the
         socket.

      msizes_t

         the maximum untagged and tagged messages that fit in a single
         transport message:

         typedef struct {
             msize_t max_untagged;
             msize_t max_tagged;
         } msizes_t;

      ddp_send(socket_t s, message_t m)

         send an untagged message.

      ddp_send_ddp(socket_t s, message_t m, ddp_addr_t d, ddp_notify_t n)

         send a tagged message to remote buffer address d.

      ddp_post_recv(socket_t s, bdesc_t b)

         post a registered buffer to accept a single received untagged
         message.  Each buffer is returned to the caller in a
         ddp_recv() untagged message reception indication, in the order
         in which it was posted.  The same buffer may be enabled on
         multiple sockets; receipt of an untagged message into the
         buffer from any of these sockets unposts the buffer from all
         sockets.

      ddp_recv(socket_t s)

         get the next received untagged message, tagged message
         reception indication, or tagged message error.

      ddp_register(socket_t s, ddp_buffer_t b)

         register a buffer for DDP on a socket.  The same buffer may be
         registered multiple times on the same or different sockets.
         The same buffer registered on different sockets may result in
         a common registration.  Different buffers may also refer to
         portions of the same underlying addressable object (buffer
         aliasing).

      ddp_deregister(bhand_t bh)

         remove a registration from a buffer.
      ddp_max_msizes(socket_t s)

         get the current maximum untagged and tagged message sizes that
         will fit in a single transport message.

2.1.3.  Transport Characteristics in DDP

   Certain characteristics of the transport on which DDP is mapped
   determine the nature of the service provided to client protocols.
   Fundamentally, the characteristics of the transport will not be
   changed by the presence of DDP.  The choice of transport is
   therefore driven not by DDP, but by the requirements of the Upper
   Layer employing the DDP service.

   Specifically, transports are:

   o  reliable or unreliable,

   o  ordered or unordered,

   o  single source or multisource,

   o  single destination or multidestination (multicast or anycast).

   Some transports support several combinations of these
   characteristics.  For example, SCTP [SCTP] is reliable, single
   source, single destination (point-to-point) and supports both
   ordered and unordered modes.

   DDP messages carried by transport are framed for processing by the
   receiver, and may be further protected for integrity or privacy in
   accordance with the transport capabilities.  DDP does not provide
   such functions.

   In general, transport characteristics equally affect transport and
   DDP message delivery.  However, there are several issues specific
   to DDP messages.

   A key component of DDP is how the following operations on the
   receiving side are ordered among themselves, and how they relate to
   corresponding operations on the sending side:

   o  set()s,

   o  untagged message reception indications, and

   o  tagged message reception indications.

   These relationships depend upon the characteristics of the
   underlying transport in a way which is defined by the DDP protocol.
   For example, if the transport is unreliable and unordered, the DDP
   protocol might specify that the client protocol is subject to the
   consequences of transport messages being lost or duplicated, rather
   than requiring that different characteristics be presented to the
   client protocol.

   Multidestination data delivery is the other transport
   characteristic which may require specific consideration in a DDP
   protocol.  As mentioned above, the basic DDP model assumes that
   buffer address values returned by ddp_register() are opaque to the
   client protocol, and can be implementation dependent.  The most
   natural way to map DDP to a multidestination transport is to
   require that all receivers produce the same buffer address when
   registering a multidestination buffer.  Restriction of the DDP
   model to accommodate multiple destinations involves engineering
   tradeoffs comparable to those of providing non-DDP
   multidestination transport capability.

   The same buffer may be enabled by ddp_post_recv() on multiple
   sockets.  In this case, the ddp_recv() untagged message reception
   indication may be provided on a different socket from that on which
   the buffer was posted.  Such indications are not ordered among
   multiple DDP sockets.

   When multiple sockets reference an untagged message reception
   buffer, local interfaces are responsible for managing the
   mechanisms of allocating posted buffers to received untagged
   messages, the handling of received untagged messages when no buffer
   is available, and resource management among multiple sockets.
   Where underprovisioning of buffers on multiple sockets is allowed,
   mechanisms should be provided to manage buffer consumption on the
   basis of a single socket or a group of related sockets.

   Architecturally, therefore, DDP is a flexible and general paradigm
   which may be applied to any variety of transports.
   Implementations
   of DDP may, however, adapt themselves to these differences in ways
   appropriate to each transport.  In all cases, the layering of DDP
   must continue to express the transport's underlying
   characteristics.

2.2.  Remote Direct Memory Access (RDMA) Protocol Architecture

   Remote Direct Memory Access (RDMA) extends the capabilities of DDP
   with two primary functions.

   First, it adds the ability to read from buffers registered to a
   socket (RDMA Read).  This allows a client protocol to perform
   arbitrary, bidirectional data movement without involving the remote
   client.  When RDMA is implemented in hardware, arbitrary data
   movement can be performed without involving the remote host CPU at
   all.

   In addition, RDMA specifies a transport-independent untagged
   message service (Send) with characteristics which are both very
   efficient to implement in hardware and convenient for client
   protocols.

   The RDMA architecture is patterned after the traditional model for
   device programming, where the client requests an operation using
   Send-like actions (programmed I/O), the server performs the
   necessary data transfers for the operation (DMA reads and writes),
   and notifies the client of completion.  The programmed I/O+DMA
   model efficiently supports a high degree of concurrency and
   flexibility for both the client and server, even when operations
   have a wide range of intrinsic latencies.

   RDMA is layered as a client protocol on top of DDP:

                   Client Protocol
                      |  ^
                Sends |  | Send reception indications
   RDMA Read Requests |  | RDMA Read Completion indications
          RDMA Writes |  | RDMA Write Completion indications
                      v  |
                     RDMA
                      |  ^
     untagged messages|  |untagged message delivery
       tagged messages|  |tagged message delivery
                      v  |
                     DDP+---> data placement
                       ^
                       | transport messages
                       v
                     . . .
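   The layering above can be illustrated with a small C sketch, in the
   spirit of this document's C-notation shorthand: an RDMA Write is
   carried by the DDP tagged-message service beneath it.  The stub
   ddp_send_ddp() and all concrete type definitions here are
   assumptions made only for this sketch, not part of the
   architecture.

```c
#include <assert.h>

/* Illustrative concrete types; the architecture leaves these abstract. */
typedef int socket_t;
typedef struct { const char *data; int len; } message_t;
typedef unsigned stag_t;
typedef unsigned address_t;
typedef struct { stag_t stag; address_t offset; } ddp_addr_t;
typedef struct { int notify; unsigned i; } ddp_notify_t;
typedef struct { int notify; unsigned i; } rdma_notify_t;

/* Stub DDP layer: records the last tagged send instead of mapping it
 * onto a real transport. */
static ddp_addr_t   last_addr;
static ddp_notify_t last_note;

static void ddp_send_ddp(socket_t s, message_t m, ddp_addr_t d,
                         ddp_notify_t n)
{
    (void)s; (void)m;
    last_addr = d;
    last_note = n;
}

/* RDMA Write expressed as a DDP client: the payload is steered by the
 * remote buffer address d, and the RDMA Write notification rides in
 * the DDP notify field. */
void rdma_write(socket_t s, message_t m, ddp_addr_t d, rdma_notify_t n)
{
    ddp_notify_t dn = { n.notify, n.i };
    ddp_send_ddp(s, m, d, dn);
}
```

   The same pattern extends to the other RDMA operations; for example,
   a Send would map onto the DDP untagged-message service instead.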
   In addition to in-line data flow, read (get()) and update (set())
   operations are performed on buffers registered with RDMA as a
   result of RDMA Read Requests and RDMA Writes, respectively.

   An RDMA `buffer' extends a DDP buffer with a get() operation that
   retrieves the value of the octet at address `a':

      typedef struct {
          const address_t start;
          const address_t end;
          void set(address_t a, data_t v);
          data_t get(address_t a);
      } rdma_buffer_t;

2.2.1.  RDMA Operations

   The RDMA layer provides:

      void rdma_send(socket_t s, message_t m);
      void rdma_write(socket_t s, message_t m, ddp_addr_t d,
                      rdma_notify_t n);
      void rdma_read(socket_t s, ddp_addr_t src, length_t l,
                     ddp_addr_t d);
      void rdma_post_recv(socket_t s, bdesc_t b);
      rdma_ind_t rdma_recv(socket_t s);
      bdesc_t rdma_register(socket_t s, rdma_buffer_t b,
                            bmode_t mode);
      void rdma_deregister(bhand_t bh);
      msizes_t rdma_max_msizes(socket_t s);

   Although, for clarity, these data transfer interfaces are
   synchronous, rdma_read() and possibly rdma_send() (in the presence
   of Send flow control) can require an arbitrary amount of time to
   complete.  To express the full concurrency and interleaving of RDMA
   data transfer, these interfaces should also be reentrant.  For
   example, a client protocol may perform an rdma_send() while an
   rdma_read() operation is in progress.

      rdma_notify_t

         RDMA Write notification information, used to signal that the
         message represents the final fragment of a multi-segmented
         RDMA message:

         typedef struct {
             boolean_t notify;
             rdma_write_id_t i;
         } rdma_notify_t;

         identical in function to ddp_notify_t, except that the type
         rdma_write_id_t may not be equivalent to ddp_msg_id_t.

      rdma_write_id_t (scalar)

         an RDMA Write identifier.
      rdma_ind_t

         a Send message, or an RDMA error:

         typedef union {
             recv_message_t m;
             rdma_err_t e;
         } rdma_ind_t;

      rdma_err_t

         an RDMA protocol error indication.  RDMA errors include buffer
         addressing errors corresponding to ddp_err_ts, and buffer
         protection violations (e.g. RDMA Writing a buffer only
         registered for reading).

      bmode_t

         buffer registration mode (permissions).  Any combination of
         permitting RDMA Read (BMODE_READ) and RDMA Write (BMODE_WRITE)
         operations.

      rdma_send(socket_t s, message_t m)

         send a message, delivering it to the next untagged RDMA buffer
         at the remote peer.

      rdma_write(socket_t s, message_t m, ddp_addr_t d, rdma_notify_t n)

         RDMA Write to remote buffer address d.

      rdma_read(socket_t s, ddp_addr_t src, length_t l, ddp_addr_t d)

         RDMA Read l octets from remote buffer address src to local
         buffer address d.

      rdma_post_recv(socket_t s, bdesc_t b)

         post a registered buffer to accept a single Send message, to
         be filled and returned in order to a subsequent caller of
         rdma_recv().  As with DDP, buffers may be enabled on multiple
         sockets, in which case ordering guarantees are relaxed.  Also
         as with DDP, local interfaces must manage the mechanisms of
         allocation and management of buffers posted to multiple
         sockets.

      rdma_recv(socket_t s)

         get the next received Send message, RDMA Write completion
         identifier, or RDMA error.

      rdma_register(socket_t s, rdma_buffer_t b, bmode_t mode)

         register a buffer for RDMA on a socket (for read access, write
         access, or both).  As with DDP, the same buffer may be
         registered multiple times on the same or different sockets,
         and different buffers may refer to portions of the same
         underlying addressable object.

      rdma_deregister(bhand_t bh)

         remove a registration from a buffer.
      rdma_max_msizes(socket_t s)

         get the current maximum Send (max_untagged) and RDMA Read or
         Write (max_tagged) operation sizes that will fit in a single
         transport message.  The values returned by rdma_max_msizes()
         are closely related to the values returned by
         ddp_max_msizes(), but may not be equal.

2.2.2.  Transport Characteristics in RDMA

   As with DDP, RDMA can be used on transports with a variety of
   different characteristics that manifest themselves directly in the
   service provided by RDMA.  Also as with DDP, the fundamental
   characteristics of the transport will not be changed by the
   presence of RDMA.

   Like DDP, an RDMA protocol must specify how:

   o  set()s,

   o  get()s,

   o  Send messages, and

   o  RDMA Read completions

   are ordered among themselves and how they relate to corresponding
   operations on the remote peer(s).  These relationships are likely
   to be a function of the underlying transport characteristics.

   There are some additional characteristics of RDMA which may
   translate poorly to unreliable or multipoint transports due to
   attendant complexities in managing endpoint state:

   o  Send flow control

   o  RDMA Read

   These difficulties can be overcome by placing restrictions on the
   service provided by RDMA.  However, many RDMA clients, especially
   those that separate data transfer and application logic concerns,
   are likely to depend upon capabilities only provided by RDMA on a
   point-to-point, reliable transport.  In other words, many potential
   Upper Layers which might avail themselves of RDMA services are
   naturally already biased toward these transport classes.

3.  Security Considerations

   Fundamentally, the DDP and RDMA protocols should not introduce
   additional vulnerabilities.  They are intermediate protocols and so
   should not perform or require functions such as authorization,
   which are the domain of Upper Layers.
   However, the DDP and RDMA
   protocols should allow mapping by strict Upper Layers which are not
   permissive of new vulnerabilities -- DDP and RDMAP implementations
   should be prohibited from `cutting corners' that create new
   vulnerabilities.  Implementations must ensure that only `supplied'
   resources (i.e. buffers) can be manipulated by DDP or RDMAP
   messages.

   System integrity must be maintained in any RDMA solution.
   Mechanisms must be specified to prevent RDMA or DDP operations from
   impairing system integrity.  For example, threats can include
   potential buffer reuse or buffer overflow, and are not merely a
   security issue.  Even trusted peers must not be allowed to damage
   local integrity.  Any DDP and RDMA protocol must address the issue
   of giving end-systems and applications the capabilities to offer
   protection from such compromises.

   Because a Steering Tag exports access to a memory region, one
   critical aspect of security is the scope of this access.  It must
   be possible to individually control specific attributes of the
   access provided by a Steering Tag, including remote read access,
   remote write access, and others that might be identified.  DDP and
   RDMA specifications must provide both implementation requirements
   relevant to this issue, and guidelines to assist implementors in
   making the appropriate design decisions.

   The use of DDP and RDMA on a transport connection may interact with
   any security mechanism, and vice-versa.  For example, if the
   security mechanism is implemented above the transport layer, the
   DDP and RDMA headers may not be protected.  Such a layering may
   therefore be inappropriate, depending on requirements.  Or, when
   TLS is employed, it may not be possible for DDP and RDMA to process
   segments out of order, due to the in-order requirement of TLS.
   These interactions should be well explored.
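   The integrity requirements above -- that a tagged operation may
   touch only the supplied buffer, with individually controlled read
   and write access -- can be sketched as a validation step performed
   before any data placement.  The steering-tag table layout, flag
   names, and return convention below are illustrative assumptions of
   this sketch, not mandated by any DDP or RDMA specification.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative per-Steering-Tag access attributes. */
#define STAG_REMOTE_READ  0x1u
#define STAG_REMOTE_WRITE 0x2u

/* A toy steering-tag table entry: the region an STag exports. */
typedef struct {
    uint32_t stag;    /* the exported Steering Tag */
    size_t   base;    /* start offset of the registered region */
    size_t   len;     /* length of the registered region */
    unsigned access;  /* STAG_REMOTE_* flags granted */
} stag_entry_t;

/* Validate a tagged access of `len` octets at `offset` under `stag`.
 * Returns 1 only if the STag is known, the requested right (`want`)
 * is enabled, and the access lies entirely within the exported
 * region; otherwise 0, and no placement occurs.  The bounds test is
 * written to avoid integer overflow on offset + len. */
int stag_access_ok(const stag_entry_t *tbl, size_t n,
                   uint32_t stag, size_t offset, size_t len,
                   unsigned want)
{
    for (size_t i = 0; i < n; i++) {
        if (tbl[i].stag != stag)
            continue;
        if ((tbl[i].access & want) != want)
            return 0;                      /* permission violation */
        if (offset < tbl[i].base ||
            len > tbl[i].len ||
            offset - tbl[i].base > tbl[i].len - len)
            return 0;                      /* bounds violation */
        return 1;
    }
    return 0;                              /* unknown STag */
}
```

   In this model an RDMA Write that runs past the end of the exported
   region, or that names an STag granted only remote read access, is
   rejected before any buffer is modified -- the kind of check that
   keeps even a trusted peer from damaging local integrity.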
   Resource issues leading to denial-of-service attacks, overwrites
   and other concurrent operations, the ordering of completions as
   required by the RDMA protocol, and the granularity of transfer are
   all within the required scope of any security analysis of RDMA and
   DDP.

4.  IANA Considerations

   IANA considerations are not addressed by this document.  Any
   IANA considerations resulting from the use of DDP or RDMA must be
   addressed in the relevant standards.

5.  Acknowledgements

   The authors wish to acknowledge the valuable contributions of
   Caitlin Bestler, David Black, Jeff Mogul and Allyn Romanow.

6.  Informative References

   [DAFS]    DAFS Collaborative, "Direct Access File System
             Specification v1.0", September 2001, available from
             http://www.dafscollaborative.org

   [FCVI]    ANSI Technical Committee T11, "Fibre Channel Standard
             Virtual Interface Architecture Mapping", ANSI/NCITS
             357-2001, March 2001, available from
             http://www.t11.org/t11/stat.nsf/fcproj

   [IB]      InfiniBand Trade Association, "InfiniBand Architecture
             Specification Volumes 1 and 2", Release 1.1, November
             2002, available from http://www.infinibandta.org/specs

   [MYR]     VMEbus International Trade Association, "Myrinet on VME
             Protocol Specification", ANSI/VITA 26-1998, August 1998,
             available from http://www.myri.com/open-specs

   [ROM]     A. Romanow, J. Mogul, T. Talpey and S. Bailey, "RDMA
             over IP Problem Statement",
             draft-ietf-rddp-problem-statement-04, Work in Progress,
             July 2004

   [SCTP]    R. Stewart et al., "Stream Control Transmission
             Protocol", RFC 2960, Standards Track

   [SDP]     InfiniBand Trade Association, "Sockets Direct Protocol
             v1.0", Annex A of InfiniBand Architecture Specification
             Volume 1, Release 1.1, November 2002, available from
             http://www.infinibandta.org/specs

   [SRVNET]  R. Horst, "TNet: A reliable system area network", IEEE
             Micro, pp. 37-45, February 1995

   [VI]      Compaq Computer Corp., Intel Corporation and Microsoft
             Corporation, "Virtual Interface Architecture
             Specification Version 1.0", December 1997, available
             from http://www.vidf.org/info/04standards.html

Authors' Addresses

   Stephen Bailey
   Sandburst Corporation
   600 Federal Street
   Andover, MA  01810
   USA

   Phone: +1 978 689 1614
   Email: steph@sandburst.com

   Tom Talpey
   Network Appliance
   375 Totten Pond Road
   Waltham, MA  02451
   USA

   Phone: +1 781 768 5329
   Email: thomas.talpey@netapp.com

Full Copyright Statement

   Copyright (C) The Internet Society (2004).  This document is
   subject to the rights, licenses and restrictions contained in BCP
   78 and except as set forth therein, the authors retain all their
   rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES,
   EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT
   THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR
   ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A
   PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.
   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgement

   Funding for the RFC Editor function is currently provided by the
   Internet Society.