Internet-Draft                                Stephen Bailey (Sandburst)
Expires: March 2004                                  Tom Talpey (NetApp)

           The Architecture of Direct Data Placement (DDP)
              and Remote Direct Memory Access (RDMA)
                       on Internet Protocols
                      draft-ietf-rddp-arch-03

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC 2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.
   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Copyright Notice

   Copyright (C) The Internet Society (2003).  All Rights Reserved.

Abstract

   This document defines an abstract architecture for Direct Data
   Placement (DDP) and Remote Direct Memory Access (RDMA) protocols to
   run on Internet Protocol-suite transports.  This architecture does
   not necessarily reflect the proper way to implement such protocols,
   but is, rather, a descriptive tool for defining and understanding
   the protocols.  DDP allows the efficient placement of data into
   buffers designated by Upper Layer Protocols (e.g. RDMA).  RDMA
   provides the semantics to enable Remote Direct Memory Access
   between peers in a way consistent with application requirements.

Table Of Contents

   1.      Introduction . . . . . . . . . . . . . . . . . . . . . .  2
   2.      Architecture . . . . . . . . . . . . . . . . . . . . . .  3
   2.1.    Direct Data Placement (DDP) Protocol Architecture  . . .  3
   2.1.1.  Transport Operations . . . . . . . . . . . . . . . . . .  5
   2.1.2.  DDP Operations . . . . . . . . . . . . . . . . . . . . .  6
   2.1.3.  Transport Characteristics in DDP . . . . . . . . . . . . 10
   2.2.    Remote Direct Memory Access Protocol Architecture  . . . 11
   2.2.1.  RDMA Operations  . . . . . . . . . . . . . . . . . . . . 12
   2.2.2.  Transport Characteristics in RDMA  . . . . . . . . . . . 15
   3.      Security Considerations  . . . . . . . . . . . . . . . . 16
   4.      IANA Considerations  . . . . . . . . . . . . . . . . . . 16
   5.      Acknowledgements . . . . . . . . . . . . . . . . . . . . 16
           Informative References . . . . . . . . . . . . . . . . . 16
           Authors' Addresses . . . . . . . . . . . . . . . . . . . 17
           Full Copyright Statement . . . . . . . . . . . . . . . . 18

1.  Introduction

   This document defines an abstract architecture for Direct Data
   Placement (DDP) and Remote Direct Memory Access (RDMA) protocols to
   run on Internet Protocol-suite transports.  This architecture does
   not necessarily reflect the proper way to implement such protocols,
   but is, rather, a descriptive tool for defining and understanding
   the protocols.

   The first part of the document describes the architecture of DDP
   protocols, including what assumptions are made about the transports
   on which DDP is built.  The second part describes the architecture
   of RDMA protocols layered on top of DDP.

   Before introducing the protocols, three definitions will be useful
   to guide discussion:

   o  Placement - writing to a data buffer.

   o  Delivery - informing the Upper Layer Protocol (ULP) (e.g. RDMA)
      that a particular message is available for use.  Delivery
      therefore may be viewed as the "control" signal associated with
      a unit of data.  Note that the order of delivery is defined more
      strictly than it is for placement.

   o  Completion - informing the ULP or application that a particular
      RDMA operation has finished.  A completion, for instance, may
      require the delivery of several messages, or it may also reflect
      that some local processing has finished.

   The goal of the DDP protocol is to allow the efficient placement of
   data into buffers designated by Upper Layer Protocols (e.g. RDMA).
   This is described in detail in [ROM].  Efficiency may be
   characterized by the minimization of the number of transfers of the
   data over the receiver's system buses.
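   The distinction between placement and delivery can be illustrated
   with a small sketch.  This is a toy model only; the class, method
   and field names below are invented for illustration and are not
   DDP protocol elements.  It shows fragments being placed out of
   order, with a single delivery signal raised only once the whole
   message is usable:

```python
# Sketch only: out-of-order placement, followed by one delivery signal.
# All names here are invented for illustration.

class Receiver:
    def __init__(self, size):
        self.buf = bytearray(size)  # the ULP-designated buffer
        self.received = 0           # octets placed so far
        self.delivered = []         # delivery indications raised to the ULP

    def place(self, offset, payload):
        # Placement: write each payload where it belongs, in arrival order.
        self.buf[offset:offset + len(payload)] = payload
        self.received += len(payload)

    def deliver(self, msg_id, size):
        # Delivery: signal the ULP only when the whole message is usable.
        if self.received >= size:
            self.delivered.append(msg_id)

r = Receiver(8)
r.place(4, b"5678")          # fragments may be placed in any order...
r.place(0, b"1234")
r.deliver(msg_id=7, size=8)  # ...but delivery is signalled once, afterwards
```

   Completion, in turn, would sit above this model: it may require the
   delivery of several such messages, or reflect local processing.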
   The goal of the RDMA protocol is to provide the semantics to enable
   Remote Direct Memory Access between peers in a way consistent with
   application requirements.  The RDMA protocol provides facilities
   immediately useful to existing and future networking, storage, and
   other application protocols [DAFS, FCVI, IB, MYR, SDP, SRVNET, VI].

   The DDP and RDMA protocols work together to achieve their
   respective goals.  DDP provides facilities to safely steer payloads
   to specific buffers at the Data Sink.  RDMA provides facilities to
   a ULP for identifying these buffers, controlling the transfer of
   data between ULP peers, and signalling completion to the ULP.  ULPs
   that do not require the features of RDMA may be layered directly on
   top of DDP.

   The DDP and RDMA protocols are transport independent.  The
   following figure shows the relationship between RDMA, DDP, Upper
   Layer Protocols and Transport.

      +---------------------------------------------------+
      |                       ULP                         |
      +---------+------------+----------------------------+
      |         |            |           RDMA             |
      |         |            +----------------------------+
      |         |                        DDP              |
      |         +-----------------------------------------+
      |                     Transport                     |
      +---------------------------------------------------+

2.  Architecture

   The Architecture section is presented in two parts: Direct Data
   Placement Protocol architecture and Remote Direct Memory Access
   Protocol architecture.

2.1.  Direct Data Placement (DDP) Protocol Architecture

   The central idea of general-purpose DDP is that a data sender will
   supplement the data it sends with placement information that allows
   the receiver's network interface to place the data directly at its
   final destination without any copying.  DDP can be used to steer
   received data to its final destination, without requiring
   layer-specific behavior for each different layer.  Data sent with
   such DDP information is said to be `tagged'.
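   This steering idea can be sketched as follows.  The sketch is an
   illustrative model only: the registry, message fields and buffer
   representation are invented assumptions, not DDP protocol elements.
   The point is that the placement information carried with tagged
   data selects the destination buffer directly, so no intermediate
   copy is needed:

```python
# Sketch only: placement information steers tagged data straight into
# its destination buffer.  All names are invented for illustration.

buffers = {
    17: bytearray(32),  # a buffer the receiver knows by Steering Tag 17
}

def place_tagged(msg):
    # The placement information (stag, offset) selects the final
    # destination directly; no intermediate copy is made.
    buf = buffers[msg["stag"]]
    off = msg["offset"]
    buf[off:off + len(msg["payload"])] = msg["payload"]

place_tagged({"stag": 17, "offset": 4, "payload": b"hello"})
```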
   The central component of the DDP architecture is the `buffer',
   which is an object with beginning and ending addresses, and a
   method (set()) to set the value of an octet at an address.  In many
   cases, a buffer corresponds directly to a portion of host user
   memory.  However, DDP does not depend on this; a buffer could be a
   disk file, or anything else that can be viewed as an addressable
   collection of octets.  Abstractly, a buffer provides the interface:

      typedef struct {
          const address_t start;
          const address_t end;
          void set(address_t a, data_t v);
      } ddp_buffer_t;

      address_t

         a reference to local memory

      data_t

         an octet data value.

   The protocol layering and in-line data flow of DDP is:

                     Client Protocol
                   (e.g. ULP or RDMA)
                         |    ^
      untagged messages  |    |  untagged message delivery
        tagged messages  |    |  tagged message delivery
                         v    |
                         DDP +---> data placement
                          ^
                          |  transport messages
                          v
                      Transport
             (e.g. SCTP, DCCP, framed TCP)
                          ^
                          |  IP datagrams
                          v
                         . . .

   In addition to in-line data flow, the client protocol registers
   buffers with DDP, and DDP performs buffer update (set()) operations
   as a result of receiving tagged messages.

   DDP messages may be split into multiple, smaller DDP messages, each
   in a separate transport message.  However, if the transport is
   unreliable or unordered, messages split across transport messages
   may or may not provide useful behavior, in the same way as
   splitting arbitrary upper layer messages across unreliable or
   unordered transport messages may or may not provide useful
   behavior.  In other words, the same considerations apply to
   building client protocols on different types of transports with or
   without the use of DDP.
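   The splitting behavior can be sketched as follows, mirroring the
   fields used in the split-message figure that follows (stag, offset,
   notify, id).  The dictionary framing is an invented illustration,
   not a wire format; the point is that every fragment carries its own
   placement information, and only the final fragment requests
   notification:

```python
# Sketch only: splitting one tagged DDP message across transport
# messages.  Each fragment carries its own placement information; only
# the final fragment carries notify=y with the message id.

def fragment(stag, offset, notify, msg_id, payload, max_size):
    msgs = []
    for pos in range(0, len(payload), max_size):
        chunk = payload[pos:pos + max_size]
        last = pos + len(chunk) == len(payload)
        msgs.append({
            "stag": stag,
            "offset": offset + pos,     # placement info per fragment
            "notify": notify and last,  # only the final fragment notifies
            "id": msg_id if (notify and last) else None,
            "payload": chunk,
        })
    return msgs

frags = fragment(stag=5, offset=100, notify=True, msg_id=9,
                 payload=b"abcdefghij", max_size=4)
```

   Because each fragment carries an absolute (stag, offset) pair, a
   receiver can place fragments independently, in whatever order the
   transport delivers them.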
   A DDP message split across transport messages looks like:

     DDP message:                  Transport messages:

     stag=s, offset=o,             message 1:
     notify=y, id=i                |type=ddp  |
     message=                      |stag=s    |
     |aabbccddee|-------.          |offset=o  |
     ~   ...    ~----.   \         |notify=n  |
     |vvwwxxyyzz|-.   \   \        |id=?      |
                  |    \   `--->   |aabbccddee|
                  |     \          ~   ...    ~
                  |      +----->   |iijjkkllmm|
                  |      |
                  |      +         message 2:
                  |       \        |type=ddp  |
                  |        \       |stag=s    |
                  |         \      |offset=o+n|
                  |          \     |notify=y  |
                  |           \    |id=i      |
                  |            `-->|nnooppqqrr|
                  |                ~   ...    ~
                  `--------------->|vvwwxxyyzz|

   Although this picture suggests that DDP information is carried
   in-line with the message payload, components of the DDP information
   may also be in transport-specific fields, or derived from
   transport-specific control information if the transport permits.

2.1.1.  Transport Operations

   For the purposes of this architecture, the transport provides:

      void xpt_send(socket_t s, message_t m);
      message_t xpt_recv(socket_t s);
      msize_t xpt_max_msize(socket_t s);

      socket_t

         a transport address, including IP addresses, ports and other
         transport-specific identifiers.

      message_t

         a string of octets.

      msize_t (scalar)

         a message size.

      xpt_send(socket_t s, message_t m)

         send a transport message.

      xpt_recv(socket_t s)

         receive a transport message.

      xpt_max_msize(socket_t s)

         get the current maximum transport message size.  Corresponds,
         roughly, to the current path Maximum Transfer Unit (PMTU),
         adjusted by underlying protocol overheads.

   Real implementations of xpt_send() and xpt_recv() typically return
   error indications, but that is not relevant to this architecture.

2.1.2.  DDP Operations

   The DDP layer provides:

      void ddp_send(socket_t s, message_t m);
      void ddp_send_ddp(socket_t s, message_t m, ddp_addr_t d,
                        ddp_notify_t n);
      void ddp_post_recv(socket_t s, bdesc_t b);
      ddp_ind_t ddp_recv(socket_t s);
      bdesc_t ddp_register(socket_t s, ddp_buffer_t b);
      void ddp_deregister(bhand_t bh);
      msizes_t ddp_max_msizes(socket_t s);

      ddp_addr_t

         the buffer address portion of a tagged message:

         typedef struct {
             stag_t stag;
             address_t offset;
         } ddp_addr_t;

      stag_t (scalar)

         a Steering Tag.  A stag_t identifies the destination buffer
         for tagged messages.  stag_ts are generated when the buffer
         is registered, communicated to the sender by some client
         protocol convention and inserted in DDP messages.  stag_t
         values in this DDP architecture are assumed to be completely
         opaque to the client protocol, and implementation-dependent.
         However, particular implementations, such as DDP on a
         multicast transport (see below), may provide the buffer
         holder some control in selecting stag_ts.

      ddp_notify_t

         the notification portion of a DDP message, used to signal
         that the message represents the final fragment of a
         multi-segmented DDP message:

         typedef struct {
             boolean_t notify;
             ddp_msg_id_t i;
         } ddp_notify_t;

      ddp_msg_id_t (scalar)

         a DDP message identifier.  msg_id_ts are chosen by the DDP
         message receiver (buffer holder), communicated to the sender
         by some client protocol convention and inserted in DDP
         messages.  Whether a message reception indication is
         requested for a DDP message is a matter of client protocol
         convention.  Unlike stag_ts, the structure of msg_id_ts is
         opaque to DDP, and therefore, completely in the hands of the
         client protocol.
      bdesc_t

         a description of a registered buffer:

         typedef struct {
             bhand_t bh;
             ddp_addr_t a;
         } bdesc_t;

         `a.offset' is the starting offset of the registered buffer,
         which may have no relationship to the `start' or `end'
         addresses of that buffer.  However, particular
         implementations, such as DDP on a multicast transport (see
         below), may allow some client protocol control over the
         starting offset.

      bhand_t

         an opaque buffer handle used to deregister a buffer.

      recv_message_t

         a description of a completed untagged receive buffer:

         typedef struct {
             bdesc_t b;
             length l;
         } recv_message_t;

      ddp_ind_t

         an untagged message, a tagged message reception indication,
         or a tagged message reception error:

         typedef union {
             recv_message_t m;
             ddp_msg_id_t i;
             ddp_err_t e;
         } ddp_ind_t;

      ddp_err_t

         indicates an error while receiving a tagged message,
         typically `offset' out of bounds, or `stag' is not registered
         to the socket.

      msizes_t

         the maximum untagged and tagged message sizes that fit in a
         single transport message:

         typedef struct {
             msize_t max_untagged;
             msize_t max_tagged;
         } msizes_t;

      ddp_send(socket_t s, message_t m)

         send an untagged message.

      ddp_send_ddp(socket_t s, message_t m, ddp_addr_t d, ddp_notify_t n)

         send a tagged message to remote buffer address d.

      ddp_post_recv(socket_t s, bdesc_t b)

         post a registered buffer to accept a single received untagged
         message.  Each buffer is returned to the caller in a
         ddp_recv() untagged message reception indication, in the
         order in which it was posted.  The same buffer may be enabled
         on multiple sockets; receipt of an untagged message into the
         buffer from any of these sockets unposts the buffer from all
         sockets.

      ddp_recv(socket_t s)

         get the next received untagged message, tagged message
         reception indication, or tagged message error.
      ddp_register(socket_t s, ddp_buffer_t b)

         register a buffer for DDP on a socket.  The same buffer may
         be registered multiple times on the same or different
         sockets.  The same buffer registered on different sockets may
         result in a common registration.  Different buffers may also
         refer to portions of the same underlying addressable object
         (buffer aliasing).

      ddp_deregister(bhand_t bh)

         remove a registration from a buffer.

      ddp_max_msizes(socket_t s)

         get the current maximum untagged and tagged message sizes
         that will fit in a single transport message.

2.1.3.  Transport Characteristics in DDP

   Certain characteristics of the transport on which DDP is mapped
   determine the nature of the service provided to client protocols.
   Specifically, transports are:

   o  reliable or unreliable,

   o  ordered or unordered,

   o  single source or multisource,

   o  single destination or multidestination (multicast or anycast).

   Some transports support several combinations of these
   characteristics.  For example, SCTP [SCTP] is reliable, single
   source, single destination (point-to-point) and supports both
   ordered and unordered modes.

   DDP messages carried by transport are framed for processing by the
   receiver, and may be further protected for integrity or privacy in
   accordance with the transport capabilities.  DDP does not provide
   such functions.

   In general, transport characteristics equally affect transport and
   DDP message delivery.  However, there are several issues specific
   to DDP messages.

   A key component of DDP is how the following operations on the
   receiving side are ordered among themselves, and how they relate
   to corresponding operations on the sending side:

   o  set()s,

   o  untagged message reception indications, and

   o  tagged message reception indications.
   These relationships depend upon the characteristics of the
   underlying transport in a way which is defined by the DDP protocol.
   For example, if the transport is unreliable and unordered, the DDP
   protocol might specify that the client protocol is subject to the
   consequences of transport messages being lost or duplicated, rather
   than requiring different characteristics be presented to the client
   protocol.

   Multidestination data delivery is the other transport
   characteristic which may require specific consideration in a DDP
   protocol.  As mentioned above, the basic DDP model assumes that
   buffer address values returned by ddp_register() are opaque to the
   client protocol, and can be implementation dependent.  The most
   natural way to map DDP to a multidestination transport is to
   require that all receivers produce the same buffer address when
   registering a multidestination buffer.  Restriction of the DDP
   model to accommodate multiple destinations involves engineering
   tradeoffs comparable to those of providing non-DDP
   multidestination transport capability.

   The same buffer may be enabled by ddp_post_recv() on multiple
   sockets.  In this case the ddp_recv() untagged message reception
   indication may be provided on a different socket from that on which
   the buffer was posted.  Such indications are not ordered among
   multiple DDP sockets.

   When multiple sockets reference an untagged message reception
   buffer, local interfaces are responsible for managing the
   mechanisms of allocating posted buffers to received untagged
   messages, the handling of received untagged messages when no buffer
   is available, and resource management among multiple sockets.
   Where underprovisioning of buffers on multiple sockets is allowed,
   mechanisms should be provided to manage buffer consumption on a
   per-socket basis or across a group of related sockets.

2.2.  Remote Direct Memory Access (RDMA) Protocol Architecture

   Remote Direct Memory Access (RDMA) extends the capabilities of DDP
   with the ability to read from buffers registered to a socket (RDMA
   Read).  This allows a client protocol to perform arbitrary,
   bidirectional data movement without involving the remote client.
   When RDMA is implemented in hardware, arbitrary data movement can
   be performed without involving the remote host CPU at all.

   In addition, RDMA protocols usually specify a
   transport-independent untagged message service (Send) with
   characteristics which are both very efficient to implement in
   hardware, and convenient for client protocols.

   The RDMA architecture is patterned after the traditional model for
   device programming, where the client requests an operation using
   Send-like actions (programmed I/O), the server performs the
   necessary data transfers for the operation (DMA reads and writes),
   and notifies the client of completion.  The programmed I/O+DMA
   model efficiently supports a high degree of concurrency and
   flexibility for both the client and server, even when operations
   have a wide range of intrinsic latencies.

   RDMA is layered as a client protocol on top of DDP:

                        Client Protocol
                            |    ^
                  Sends     |    |  Send reception indications
     RDMA Read Requests     |    |  RDMA Read Completion indications
            RDMA Writes     |    |  RDMA Write Completion indications
                            v    |
                           RDMA
                            |    ^
      untagged messages     |    |  untagged message delivery
        tagged messages     |    |  tagged message delivery
                            v    |
                            DDP +---> data placement
                             ^
                             |  transport messages
                             v
                            . . .

   In addition to in-line data flow, read (get()) and update (set())
   operations are performed on buffers registered with RDMA as a
   result of RDMA Read Requests and RDMA Writes, respectively.
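   This division of labor between set() and get() can be sketched
   concretely.  The class below is a Python stand-in for the abstract
   rdma_buffer_t interface defined in the next section, with invented
   internals: set() models placement by an incoming RDMA Write, and
   get() models servicing an incoming RDMA Read Request at the data
   source:

```python
# Sketch only: a stand-in for the rdma_buffer_t interface, with
# invented internals.  Not part of any RDMA specification.

class RdmaBuffer:
    def __init__(self, start, length):
        self.start, self.end = start, start + length
        self._store = bytearray(length)

    def set(self, a, v):
        # update: used when an RDMA Write is placed into the buffer
        self._store[a - self.start] = v

    def get(self, a):
        # read: used to service an incoming RDMA Read Request
        return self._store[a - self.start]

def serve_rdma_read(buf, addr, length):
    # Data source side of an RDMA Read: fetch the requested octets
    # out of the registered buffer, octet by octet, via get().
    return bytes(buf.get(addr + i) for i in range(length))

b = RdmaBuffer(start=0x2000, length=4)
for i, v in enumerate(b"DATA"):
    b.set(0x2000 + i, v)
```

   In hardware implementations both paths run without involving the
   remote host CPU; the sketch only illustrates which operation each
   RDMA primitive exercises.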
   An RDMA `buffer' extends a DDP buffer with a get() operation that
   retrieves the value of the octet at address `a':

      typedef struct {
          const address_t start;
          const address_t end;
          void set(address_t a, data_t v);
          data_t get(address_t a);
      } rdma_buffer_t;

2.2.1.  RDMA Operations

   The RDMA layer provides:

      void rdma_send(socket_t s, message_t m);
      void rdma_write(socket_t s, message_t m, ddp_addr_t d,
                      rdma_notify_t n);
      void rdma_read(socket_t s, ddp_addr_t src, length l,
                     ddp_addr_t d);
      void rdma_post_recv(socket_t s, bdesc_t b);
      rdma_ind_t rdma_recv(socket_t s);
      bdesc_t rdma_register(socket_t s, rdma_buffer_t b,
                            bmode_t mode);
      void rdma_deregister(bhand_t bh);
      msizes_t rdma_max_msizes(socket_t s);

   Although, for clarity, these data transfer interfaces are
   synchronous, rdma_read() and possibly rdma_send() (in the presence
   of Send flow control) can require an arbitrary amount of time to
   complete.  To express the full concurrency and interleaving of RDMA
   data transfer, these interfaces should also be reentrant.  For
   example, a client protocol may perform an rdma_send() while an
   rdma_read() operation is in progress.

      rdma_notify_t

         RDMA Write notification information, used to signal that the
         message represents the final fragment of a multi-segmented
         RDMA message:

         typedef struct {
             boolean_t notify;
             rdma_write_id_t i;
         } rdma_notify_t;

         identical in function to ddp_notify_t, except that the type
         rdma_write_id_t may not be equivalent to ddp_msg_id_t.

      rdma_write_id_t (scalar)

         an RDMA Write identifier.

      rdma_ind_t

         a Send message, or an RDMA error:

         typedef union {
             recv_message_t m;
             rdma_err_t e;
         } rdma_ind_t;

      rdma_err_t

         an RDMA protocol error indication.  RDMA errors include
         buffer addressing errors corresponding to ddp_err_ts, and
         buffer protection violations (e.g.
         RDMA Writing a buffer only registered for reading).

      bmode_t

         buffer registration mode (permissions).  Any combination of
         permitting RDMA Read (BMODE_READ) and RDMA Write
         (BMODE_WRITE) operations.

      rdma_send(socket_t s, message_t m)

         send a message, delivering it to the next untagged RDMA
         buffer at the remote peer.

      rdma_write(socket_t s, message_t m, ddp_addr_t d, rdma_notify_t n)

         RDMA Write to remote buffer address d.

      rdma_read(socket_t s, ddp_addr_t src, length l, ddp_addr_t d)

         RDMA Read l octets from remote buffer address src to local
         buffer address d.

      rdma_post_recv(socket_t s, bdesc_t b)

         post a registered buffer to accept a single Send message, to
         be filled and returned in-order to a subsequent caller of
         rdma_recv().  As with DDP, buffers may be enabled on multiple
         sockets, in which case ordering guarantees are relaxed.  Also
         as with DDP, local interfaces must manage the mechanisms of
         allocation and management of buffers posted to multiple
         sockets.

      rdma_recv(socket_t s)

         get the next received Send message, RDMA Write completion
         identifier, or RDMA error.

      rdma_register(socket_t s, rdma_buffer_t b, bmode_t mode)

         register a buffer for RDMA on a socket (for read access,
         write access or both).  As with DDP, the same buffer may be
         registered multiple times on the same or different sockets,
         and different buffers may refer to portions of the same
         underlying addressable object.

      rdma_deregister(bhand_t bh)

         remove a registration from a buffer.

      rdma_max_msizes(socket_t s)

         get the current maximum Send (max_untagged) and RDMA Read or
         Write (max_tagged) operations that will fit in a single
         transport message.  The values returned by rdma_max_msizes()
         are closely related to the values returned by
         ddp_max_msizes(), but may not be equal.

2.2.2.  Transport Characteristics in RDMA

   As with DDP, RDMA can be used on transports with a variety of
   different characteristics that manifest themselves directly in the
   service provided by RDMA.

   Like DDP, an RDMA protocol must specify how:

   o  set()s,

   o  get()s,

   o  Send messages, and

   o  RDMA Read completions

   are ordered among themselves and how they relate to corresponding
   operations on the remote peer(s).  These relationships are likely
   to be a function of the underlying transport characteristics.

   There are some additional characteristics of RDMA which may
   translate poorly to unreliable or multipoint transports due to
   attendant complexities in managing endpoint state:

   o  Send flow control

   o  RDMA Read

   These difficulties can be overcome by placing restrictions on the
   service provided by RDMA.  However, many RDMA clients, especially
   those that separate data transfer and application logic concerns,
   are likely to depend upon capabilities only provided by RDMA on a
   point-to-point, reliable transport.

3.  Security Considerations

   System integrity must be maintained in any RDMA solution.
   Mechanisms must be specified to prevent RDMA or DDP operations from
   impairing system integrity.  For example, the threat caused by
   potential buffer overflow needs full examination, and prevention
   mechanisms must be spelled out.

   Because a Steering Tag exports access to a memory region, one
   critical aspect of security is the scope of this access.  It must
   be possible to individually control specific attributes of the
   access provided by a Steering Tag, including remote read access,
   remote write access, and others that might be identified.  DDP and
   RDMA specifications must provide both implementation requirements
   relevant to this issue, and guidelines to assist implementors in
   making the appropriate design decisions.
   Resource issues leading to denial-of-service attacks, overwrites
   and other concurrent operations, the ordering of completions as
   required by the RDMA protocol, and the granularity of transfer are
   all within the required scope of any security analysis of RDMA and
   DDP.

4.  IANA Considerations

   IANA considerations are not addressed by this document.  Any IANA
   considerations resulting from the use of DDP or RDMA must be
   addressed in the relevant standards.

5.  Acknowledgements

   The authors wish to acknowledge the valuable contributions of
   Caitlin Bestler, David Black, Jeff Mogul and Allyn Romanow.

6.  Informative References

   [DAFS]
        DAFS Collaborative, "Direct Access File System Specification
        v1.0", September 2001, available from
        http://www.dafscollaborative.org

   [FCVI]
        ANSI Technical Committee T11, "Fibre Channel Standard Virtual
        Interface Architecture Mapping", ANSI/NCITS 357-2001, March
        2001, available from http://www.t11.org/t11/stat.nsf/fcproj

   [IB] InfiniBand Trade Association, "InfiniBand Architecture
        Specification Volumes 1 and 2", Release 1.1, November 2002,
        available from http://www.infinibandta.org/specs

   [MYR]
        VMEbus International Trade Association, "Myrinet on VME
        Protocol Specification", ANSI/VITA 26-1998, August 1998,
        available from http://www.myri.com/open-specs

   [ROM]
        A. Romanow, J. Mogul, T. Talpey and S. Bailey, "RDMA over IP
        Problem Statement", draft-ietf-rddp-problem-statement-02,
        Work in Progress, June 2003

        RFC Editor note: Replace problem statement draft-ietf- name,
        status and date with appropriate reference when assigned.

   [SCTP]
        R. Stewart et al., "Stream Control Transmission Protocol",
        RFC 2960, Standards Track

   [SDP]
        InfiniBand Trade Association, "Sockets Direct Protocol v1.0",
        Annex A of InfiniBand Architecture Specification Volume 1,
        Release 1.1, November 2002, available from
        http://www.infinibandta.org/specs

   [SRVNET]
        R. Horst, "TNet: A reliable system area network", IEEE Micro,
        pp. 37-45, February 1995

   [VI] Compaq Computer Corp., Intel Corporation and Microsoft
        Corporation, "Virtual Interface Architecture Specification
        Version 1.0", December 1997, available from
        http://www.vidf.org/info/04standards.html

Authors' Addresses

   Stephen Bailey
   Sandburst Corporation
   600 Federal Street
   Andover, MA 01810 USA

   Phone: +1 978 689 1614
   Email: steph@sandburst.com

   Tom Talpey
   Network Appliance
   375 Totten Pond Road
   Waltham, MA 02451 USA

   Phone: +1 781 768 5329
   Email: thomas.talpey@netapp.com

Full Copyright Statement

   Copyright (C) The Internet Society (2003).  All Rights Reserved.

   This document and translations of it may be copied and furnished to
   others, and derivative works that comment on or otherwise explain
   it or assist in its implementation may be prepared, copied,
   published and distributed, in whole or in part, without restriction
   of any kind, provided that the above copyright notice and this
   paragraph are included on all such copies and derivative works.
   However, this document itself may not be modified in any way, such
   as by removing the copyright notice or references to the Internet
   Society or other Internet organizations, except as needed for the
   purpose of developing Internet standards in which case the
   procedures for copyrights defined in the Internet Standards process
   must be followed, or as required to translate it into languages
   other than English.
   The limited permissions granted above are perpetual and will not be
   revoked by the Internet Society or its successors or assigns.

   This document and the information contained herein is provided on
   an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET
   ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR
   IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF
   THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
   WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.