2 Internet-Draft Stephen Bailey (Sandburst) 3 Expires: December 2003 Tom Talpey (NetApp) 5 The Architecture of Direct Data Placement (DDP) 6 and Remote Direct Memory Access (RDMA) 7 on Internet Protocols 8 draft-ietf-rddp-arch-02 10 Status of this Memo 12 This document is an Internet-Draft and is in full conformance with 13 all provisions of Section 10 of RFC2026. 15 Internet-Drafts are working documents of the Internet Engineering 16 Task Force (IETF), its areas, and its working groups. Note that 17 other groups may also distribute working documents as Internet- 18 Drafts.
20 Internet-Drafts are draft documents valid for a maximum of six 21 months and may be updated, replaced, or obsoleted by other 22 documents at any time. It is inappropriate to use Internet-Drafts 23 as reference material or to cite them other than as "work in 24 progress." 26 The list of current Internet-Drafts can be accessed at 27 http://www.ietf.org/ietf/1id-abstracts.txt 29 The list of Internet-Draft Shadow Directories can be accessed at 30 http://www.ietf.org/shadow.html. 32 Copyright Notice 34 Copyright (C) The Internet Society (2003). All Rights Reserved. 36 Abstract 38 This document defines an abstract architecture for Direct Data 39 Placement (DDP) and Remote Direct Memory Access (RDMA) protocols to 40 run on Internet Protocol-suite transports. This architecture does 41 not necessarily reflect the proper way to implement such protocols, 42 but is, rather, a descriptive tool for defining and understanding 43 the protocols. DDP allows the efficient placement of data into 44 buffers designated by Upper Layer Protocols (e.g. RDMA). RDMA 45 provides the semantics to enable Remote Direct Memory Access 46 between peers in a way consistent with application requirements. 48 Table Of Contents 50 1. Introduction . . . . . . . . . . . . . . . . . . . . . . 2 51 2. Architecture . . . . . . . . . . . . . . . . . . . . . . 3 52 2.1. Direct Data Placement (DDP) Protocol Architecture . . . 3 53 2.1.1. Transport Operations . . . . . . . . . . . . . . . . . . 5 54 2.1.2. DDP Operations . . . . . . . . . . . . . . . . . . . . . 6 55 2.1.3. Transport Characteristics in DDP . . . . . . . . . . . . 9 56 2.2. Remote Direct Memory Access Protocol Architecture . . . 10 57 2.2.1. RDMA Operations . . . . . . . . . . . . . . . . . . . . 12 58 2.2.2. Transport Characteristics in RDMA . . . . . . . . . . . 14 59 3. Security Considerations . . . . . . . . . . . . . . . . 14 60 4. IANA Considerations . . . . . . . . . . . . . . . . . . 15 61 5. Acknowledgements . . . . . . . . . . . . . . . . . . . . 15 62 Informative References . . . . . . . . . . . . . . . . . 15 63 Authors' Addresses . . . . . . . . . . . . . . . . . . . 16 64 Full Copyright Statement . . . . . . . . . . . . . . . . 17 66 1. Introduction 68 This document defines an abstract architecture for Direct Data 69 Placement (DDP) and Remote Direct Memory Access (RDMA) protocols to 70 run on Internet Protocol-suite transports. This architecture does 71 not necessarily reflect the proper way to implement such protocols, 72 but is, rather, a descriptive tool for defining and understanding 73 the protocols. 75 The first part of the document describes the architecture of DDP 76 protocols, including what assumptions are made about the transports 77 on which DDP is built. The second part describes the architecture 78 of RDMA protocols layered on top of DDP. 80 Before introducing the protocols, three definitions will be useful 81 to guide discussion: 83 o Placement - writing to a data buffer. 85 o Delivery - informing the Upper Layer Protocol (ULP) (e.g. 86 RDMA) that a particular message is available for use. 87 Delivery therefore may be viewed as the "control" signal 88 associated with a unit of data. Note that the order of 89 delivery is defined more strictly than it is for placement. 91 o Completion - informing the ULP or application that a 92 particular RDMA operation has finished. A completion, for 93 instance, may require the delivery of several messages, or it 94 may also reflect that some local processing has finished. 
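To illustrate the distinction between placement and delivery, the following sketch (purely hypothetical; fragment_t, set() and deliver_to_ulp() are illustrative names only, not defined by this architecture) shows a receive path in which the octets of each arriving fragment are placed as soon as they arrive, while the ULP is informed only once the complete message is present:

      /* Hypothetical sketch, not part of the architecture. */
      void on_fragment(fragment_t f)
      {
          /* Placement: octets are written into the buffer as soon
             as, and in whatever order, fragments arrive. */
          for (unsigned i = 0; i < f.length; i++)
              set(f.dest + i, f.data[i]);

          /* Delivery: the ULP is informed only once the complete
             message has been placed, so delivery is ordered more
             strictly than placement. */
          if (f.is_last)
              deliver_to_ulp(f.msg_id);
      }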
96 The goal of the DDP protocol is to allow the efficient placement of 97 data into buffers designated by Upper Layer Protocols (e.g. RDMA). 98 This is described in detail in [ROM]. Efficiency may be 99 characterized by the minimization of the number of transfers of the 100 data over the receiver's system buses. 102 The goal of the RDMA protocol is to provide the semantics to enable 103 Remote Direct Memory Access between peers in a way consistent with 104 application requirements. The RDMA protocol provides facilities 105 immediately useful to existing and future networking, storage, and 106 other application protocols. [DAFS, FCVI, IB, MYR, SDP, SRVNET, 107 VI] 109 The DDP and RDMA protocols work together to achieve their 110 respective goals. DDP provides facilities to safely steer payloads 111 to specific buffers at the Data Sink. RDMA provides facilities to 112 a ULP for identifying these buffers, controlling the transfer of 113 data between ULP peers, and signalling completion to the ULP. ULPs 114 that do not require the features of RDMA may be layered directly on 115 top of DDP. 117 The DDP and RDMA protocols are transport independent. The 118 following figure shows the relationship between RDMA, DDP, Upper 119 Layer Protocols and Transport. 121 +---------------------------------------------------+ 122 | ULP | 123 +---------+------------+----------------------------+ 124 | | | RDMA | 125 | | +----------------------------+ 126 | | DDP | 127 | +-----------------------------------------+ 128 | Transport | 129 +---------------------------------------------------+ 131 2. Architecture 133 The Architecture section is presented in two parts: Direct Data 134 Placement Protocol architecture and Remote Direct Memory Access 135 Protocol architecture. 137 2.1. Direct Data Placement (DDP) Protocol Architecture 139 The central idea of general-purpose DDP is that a data sender will 140 supplement the data it sends with placement information that allows 141 the receiver's network interface to place the data directly at its 142 final destination without any copying. DDP can be used to steer 143 received data to its final destination, without requiring layer- 144 specific behavior for each different layer. Data sent with such 145 DDP information is said to be `tagged'. 147 The central component of the DDP architecture is the `buffer', 148 which is an object with beginning and ending addresses, and a 149 method (set()) to set the value of an octet at an address. In many 150 cases, a buffer corresponds directly to a portion of host user 151 memory. However, DDP does not depend on this---a buffer could be a 152 disk file, or anything else that can be viewed as an addressable 153 collection of octets. Abstractly, a buffer provides the interface: 155 typedef struct { 156 const address_t start; 157 const address_t end; 158 void set(address_t a, data_t v); 159 } ddp_buffer_t; 161 address_t 163 a reference to local memory 165 data_t 167 an octet data value. 169 The protocol layering and in-line data flow of DDP is: 171 Client Protocol 172 (e.g. ULP or RDMA) 173 | ^ 174 untagged messages | | untagged message delivery 175 tagged messages | | tagged message delivery 176 v | 177 DDP+---> data placement 178 ^ 179 | transport messages 180 v 181 Transport 182 (e.g. SCTP, DCCP, framed TCP) 183 ^ 184 | IP datagrams 185 v 186 . . . 188 In addition to in-line data flow, the client protocol registers 189 buffers with DDP, and DDP performs buffer update (set()) operations 190 as a result of receiving tagged messages.
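For concreteness, one possible (though by no means required) realization of the buffer abstraction above is a region of ordinary host memory. The following sketch is illustrative only; host_buffer_t and host_set() are not part of the architecture:

      #include <stdint.h>
      #include <assert.h>

      typedef uintptr_t address_t;   /* a reference to local memory */
      typedef uint8_t   data_t;      /* an octet data value */

      /* One realization of the abstract buffer: a bounded region of
         host memory. */
      typedef struct {
          address_t start;           /* first valid address */
          address_t end;             /* last valid address */
      } host_buffer_t;

      /* The set() operation: place octet v at address a, which must
         lie within the buffer's bounds. */
      void host_set(const host_buffer_t *b, address_t a, data_t v)
      {
          assert(a >= b->start && a <= b->end);
          *(data_t *)a = v;
      }

A buffer backed by a disk file, or by any other addressable collection of octets, would present the same set() interface with a different implementation behind it.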
192 DDP messages may be split into multiple, smaller DDP messages, each 193 in a separate transport message. However, if the transport is 194 unreliable or unordered, messages split across transport messages 195 may or may not provide useful behavior, in the same way as 196 splitting arbitrary upper layer messages across unreliable or 197 unordered transport messages may or may not provide useful 198 behavior. In other words, the same considerations apply to 199 building client protocols on different types of transports with or 200 without the use of DDP. 202 A DDP message split across transport messages looks like: 204 DDP message: Transport messages: 206 stag=s, offset=o, message 1: 207 notify=y, id=i |type=ddp | 208 message= |stag=s | 209 |aabbccddee|-------. |offset=o | 210 ~ ... ~----. \ |notify=n | 211 |vvwwxxyyzz|-. \ \ |id=? | 212 | \ `--->|aabbccddee| 213 | \ ~ ... ~ 214 | +----->|iijjkkllmm| 215 | | 216 + | message 2: 217 \ | |type=ddp | 218 \ | |stag=s | 219 \ + |offset=o+n| 220 \ \ |notify=y | 221 \ \ |id=i | 222 \ `-->|nnooppqqrr| 223 \ ~ ... ~ 224 `---->|vvwwxxyyzz| 226 Although this picture suggests that DDP information is carried in- 227 line with the message payload, components of the DDP information 228 may also be in transport-specific fields, or derived from 229 transport-specific control information if the transport permits. 231 2.1.1. Transport Operations 233 For the purposes of this architecture, the transport provides: 235 void xpt_send(socket_t s, message_t m); 236 message_t xpt_recv(socket_t s); 237 msize_t xpt_max_msize(socket_t s); 239 socket_t 241 a transport address, including IP addresses, ports and other 242 transport-specific identifiers. 244 message_t 246 a string of octets. 248 msize_t (scalar) 250 a message size. 252 xpt_send(socket_t s, message_t m) 254 send a transport message. 256 xpt_recv(socket_t s) 258 receive a transport message. 260 xpt_max_msize(socket_t s) 262 get the current maximum transport message size. Corresponds, 263 roughly, to the current path Maximum Transfer Unit (PMTU), 264 adjusted by underlying protocol overheads. 266 Real implementations of xpt_send() and xpt_recv() typically return 267 error indications, but that is not relevant to this architecture. 269 2.1.2. DDP Operations 271 The DDP layer provides: 273 void ddp_send(socket_t s, message_t m); 274 void ddp_send_ddp(socket_t s, message_t m, ddp_addr_t d, 275 ddp_notify_t n); 276 ddp_ind_t ddp_recv(socket_t s); 277 bdesc_t ddp_register(socket_t s, ddp_buffer_t b); 278 void ddp_deregister(bhand_t bh); 279 msizes_t ddp_max_msizes(socket_t s); 281 ddp_addr_t 283 the buffer address portion of a tagged message: 285 typedef struct { 286 stag_t stag; 287 address_t offset; 288 } ddp_addr_t; 290 stag_t (scalar) 292 a Steering Tag. A stag_t identifies the destination buffer 293 for tagged messages. stag_ts are generated when the buffer is 294 registered, communicated to the sender by some client protocol 295 convention and inserted in DDP messages. stag_t values in 296 this DDP architecture are assumed to be completely opaque to 297 the client protocol, and implementation-dependent. However, 298 particular implementations, such as DDP on a multicast 299 transport (see below), may provide the buffer holder some 300 control in selecting stag_ts. 
302 ddp_notify_t 304 the notification portion of a DDP message, used to signal that 305 the message represents the final fragment of a multi-segmented 306 DDP message: 308 typedef struct { 309 boolean_t notify; 310 ddp_msg_id_t i; 311 } ddp_notify_t; 313 ddp_msg_id_t (scalar) 315 a DDP message identifier. msg_id_ts are chosen by the DDP 316 message receiver (buffer holder), communicated to the sender 317 by some client protocol convention and inserted in DDP 318 messages. Whether a message reception indication is requested 319 for a DDP message is a matter of client protocol convention. 320 Unlike stag_ts, the structure of msg_id_ts is opaque to DDP, 321 and therefore, completely in the hands of the client protocol. 323 bdesc_t 325 a description of a registered buffer: 327 typedef struct { 328 bhand_t bh; 329 ddp_addr_t a; 330 } bdesc_t; 332 `a.offset' is the starting offset of the registered buffer, 333 which may have no relationship to the `start' or `end' 334 addresses of that buffer. However, particular 335 implementations, such as DDP on a multicast transport (see 336 below), may allow some client protocol control over the 337 starting offset. 339 bhand_t 341 an opaque buffer handle used to deregister a buffer. 343 ddp_ind_t 345 an untagged message, a tagged message reception indication, or 346 a tagged message reception error: 348 typedef union { 349 message_t m; 350 ddp_msg_id_t i; 351 ddp_err_t e; 352 } ddp_ind_t; 354 ddp_err_t 356 indicates an error while receiving a tagged message, typically 357 `offset' out of bounds, or `stag' is not registered to the 358 socket. 360 msizes_t 362 The maximum untagged and tagged messages that fit in a single 363 transport message: 365 typedef struct { 366 msize_t max_untagged; 367 msize_t max_tagged; 368 } msizes_t; 370 ddp_send(socket_t s, message_t m) 371 send an untagged message. 373 ddp_send_ddp(socket_t s, message_t m, ddp_addr_t d, ddp_notify_t n) 375 send a tagged message to remote buffer address d. 377 ddp_recv(socket_t s) 379 get the next received untagged message, tagged message 380 reception indication, or tagged message error. 382 ddp_register(socket_t s, ddp_buffer_t b) 384 register a buffer for DDP on a socket. The same buffer may be 385 registered multiple times on the same or different sockets. 386 The same buffer registered on different sockets may result in 387 a common registration. Different buffers may also refer to 388 portions of the same underlying addressable object (buffer 389 aliasing). 391 ddp_deregister(bhand_t bh) 393 remove a registration from a buffer. 395 ddp_max_msizes(socket_t s) 397 get the current maximum untagged and tagged message sizes that 398 will fit in a single transport message. 400 2.1.3. Transport Characteristics In DDP 402 Certain characteristics of the transport on which DDP is mapped 403 determine the nature of the service provided to client protocols. 404 Specifically, transports are: 406 o reliable or unreliable, 408 o ordered or unordered, 410 o single source or multisource, 412 o single destination or multidestination (multicast or anycast). 414 Some transports support several combinations of these 415 characteristics. For example, SCTP [SCTP] is reliable, single 416 source, single destination (point-to-point) and supports both 417 ordered and unordered modes. 419 DDP messages carried by transport are framed for processing by the 420 receiver, and may be further protected for integrity or privacy in 421 accordance with the transport capabilities. DDP does not provide 422 such functions. 
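Before considering how transport characteristics affect this service, the following receiver-side sketch shows how a client protocol might consume the indications defined in Section 2.1.2. It is illustrative only: the architecture defines ddp_ind_t as a bare union, so the discriminating `kind' field and the client_handle_*() routines are assumed here purely for the sketch:

      /* Illustrative receive loop on socket s (hypothetical). */
      for (;;) {
          ddp_ind_t ind = ddp_recv(s);

          if (ind.kind == DDP_IND_UNTAGGED) {
              /* a whole untagged message, delivered to the client */
              client_handle_message(ind.m);
          } else if (ind.kind == DDP_IND_TAGGED) {
              /* tagged message reception indication: the payload has
                 already been placed by set(); only the id is passed up */
              client_handle_notify(ind.i);
          } else {
              /* ddp_err_t: offset out of bounds, or stag not
                 registered on this socket */
              client_handle_error(ind.e);
          }
      }

The ordering of these indications, relative to each other and to the underlying set() operations, is exactly the transport-dependent question discussed next.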
424 In general, transport characteristics equally affect transport and 425 DDP message delivery. However, there are several issues specific 426 to DDP messages. 428 A key component of DDP is how the following operations on the 429 receiving side are ordered among themselves, and how they relate to 430 corresponding operations on the sending side: 432 o set()s, 434 o untagged message reception indications, and 436 o tagged message reception indications. 438 These relationships depend upon the characteristics of the 439 underlying transport in a way which is defined by the DDP protocol. 440 For example, if the transport is unreliable and unordered, the DDP 441 protocol might specify that the client protocol is subject to the 442 consequences of transport messages being lost or duplicated, rather 443 than requiring different characteristics be presented to the client 444 protocol. 446 Multidestination data delivery is the other transport 447 characteristic which may require specific consideration in a DDP 448 protocol. As mentioned above, the basic DDP model assumes that 449 buffer address values returned by ddp_register() are opaque to the 450 client protocol, and can be implementation dependent. The most 451 natural way to map DDP to a multidestination transport is to 452 require all receivers produce the same buffer address when 453 registering a multidestination destination buffer. Restriction of 454 the DDP model to accommodate multiple destinations involves 455 engineering tradeoffs comparable to those of providing non-DDP 456 multidestination transport capability. 458 2.2. Remote Direct Memory Access (RDMA) Protocol Architecture 460 Remote Direct Memory Access (RDMA) extends the capabilities of DDP 461 with the ability to read from buffers registered to a socket (RDMA 462 Read). This allows a client protocol to perform arbitrary, 463 bidirectional data movement without involving the remote client. 464 When RDMA is implemented in hardware, arbitrary data movement can 465 be performed without involving the remote host CPU at all. 467 In addition, RDMA protocols usually specify a transport-independent 468 untagged message service (Send) with characteristics which are both 469 very efficient to implement in hardware, and convenient for client 470 protocols. 472 The RDMA architecture is patterned after the traditional model for 473 device programming, where the client requests an operation using 474 Send-like actions (programmed I/O), the server performs the 475 necessary data transfers for the operation (DMA reads and writes), 476 and notifies the client of completion. The programmed I/O+DMA 477 model efficiently supports a high degree of concurrency and 478 flexibility for both the client and server, even when operations 479 have a wide range of intrinsic latencies. 481 RDMA is layered as a client protocol on top of DDP: 483 Client Protocol 484 | ^ 485 Sends | | Send reception indications 486 RDMA Read Requests | | RDMA Read Completion indications 487 RDMA Writes | | RDMA Write Completion indications 488 v | 489 RDMA 490 | ^ 491 untagged messages | | untagged message delivery 492 tagged messages | | tagged message delivery 493 v | 494 DDP+---> data placement 495 ^ 496 | transport messages 497 v 498 . . . 500 In addition to in-line data flow, read (get()) and update (set()) 501 operations are performed on buffers registered with RDMA as a 502 result of RDMA Read Requests and RDMA Writes, respectively. 
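As a concrete illustration of this programmed I/O + DMA pattern, the following sketch uses the interface defined in Section 2.2.1 below. It is hypothetical: the request encoding (encode_request(), encode_status()), the operation name OP_READ_OBJECT, and the no_notify value are client protocol conventions, not part of the architecture:

      /* Client side (on its end of socket s): expose a buffer for
         the result and ask the server to fill it. */
      bdesc_t d = rdma_register(s, result_buffer, BMODE_WRITE);
      rdma_send(s, encode_request(OP_READ_OBJECT, d.a));

      /* Server side (on its end of the socket): perform the data
         transfer with an RDMA Write, then signal completion with an
         untagged Send. */
      rdma_write(s, object_data, request.dest, no_notify);
      rdma_send(s, encode_status(STATUS_OK));

      /* Client side again: the completion arrives as a Send message
         via rdma_recv(); the data has already been placed into
         result_buffer by DDP. */
      rdma_ind_t ind = rdma_recv(s);

Note that the server needs to know nothing about the client's memory beyond the buffer address carried in the request, and the data movement itself does not require the client's involvement.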
504 An RDMA `buffer' extends a DDP buffer with a get() operation that 505 retrieves the value of the octet at address `a': 507 typedef struct { 508 const address_t start; 509 const address_t end; 510 void set(address_t a, data_t v); 511 data_t get(address_t a); 512 } rdma_buffer_t; 514 2.2.1. RDMA Operations 516 The RDMA layer provides: 518 void rdma_send(socket_t s, message_t m); 519 void rdma_write(socket_t s, message_t m, ddp_addr_t d, 520 rdma_notify_t n); 521 void rdma_read(socket_t s, ddp_addr_t s, length l, ddp_addr_t d); 522 rdma_ind_t rdma_recv(socket_t s); 523 bdesc_t rdma_register(socket_t s, rdma_buffer_t b, 524 bmode_t mode); 525 void rdma_deregister(bhand_t bh); 526 msizes_t rdma_max_msizes(socket_t s); 528 Although, for clarity, these data transfer interfaces are 529 synchronous, rdma_read() and possibly rdma_send() (in the presence 530 of Send flow control) can require an arbitrary amount of time to 531 complete. To express the full concurrency and interleaving of RDMA 532 data transfer, these interfaces should also be reentrant. For 533 example, a client protocol may perform an rdma_send(), while an 534 rdma_read() operation is in progress. 536 rdma_notify_t 538 RDMA Write notification information, used to signal that the 539 message represents the final fragment of a multi-segmented 540 RDMA message: 542 typedef struct { 543 boolean_t notify; 544 rdma_write_id_t i; 545 } rdma_notify_t; 547 identical in function to ddp_notify_t, except that the type 548 rdma_write_id_t may not be equivalent to ddp_msg_id_t. 550 rdma_write_id_t (scalar) 552 an RDMA Write identifier. 554 rdma_ind_t 556 a Send message, an RDMA Write completion identifier, or an RDMA error: 558 typedef union { 559 message_t m; 560 rdma_write_id_t i; rdma_err_t e; 561 } rdma_ind_t; 563 rdma_err_t 565 an RDMA protocol error indication. RDMA errors include buffer 566 addressing errors corresponding to ddp_err_ts, and buffer 567 protection violations (e.g. RDMA Writing a buffer only 568 registered for reading). 570 bmode_t 572 buffer registration mode (permissions). Any combination of 573 permitting RDMA Read (BMODE_READ) and RDMA Write (BMODE_WRITE) 574 operations. 576 rdma_send(socket_t s, message_t m) 578 send a message, delivering it to the next untagged RDMA buffer 579 at the remote peer. 581 rdma_write(socket_t s, message_t m, ddp_addr_t d, rdma_notify_t n) 583 RDMA Write to remote buffer address d. 585 rdma_read(socket_t s, ddp_addr_t s, length l, ddp_addr_t d) 587 RDMA Read l octets from remote buffer address s to local 588 buffer address d. 590 rdma_recv(socket_t s) 592 get the next received Send message, RDMA Write completion 593 identifier, or RDMA error. 595 rdma_register(socket_t s, rdma_buffer_t b, bmode_t mode) 597 register a buffer for RDMA on a socket (for read access, write 598 access or both). As with DDP, the same buffer may be 599 registered multiple times on the same or different sockets, 600 and different buffers may refer to portions of the same 601 underlying addressable object. 603 rdma_deregister(bhand_t bh) 604 remove a registration from a buffer. 606 rdma_max_msizes(socket_t s) 608 get the current maximum Send (max_untagged) and RDMA Read or 609 Write (max_tagged) operations that will fit in a single 610 transport message. The values returned by rdma_max_msizes() 611 are closely related to the values returned by 612 ddp_max_msizes(), but may not be equal. 614 2.2.2.
Transport Characteristics In RDMA 616 As with DDP, RDMA can be used on transports with a variety of 617 different characteristics that manifest themselves directly in the 618 service provided by RDMA. 620 Like DDP, an RDMA protocol must specify how: 622 o set()s, 624 o get()s, 626 o Send messages, and 628 o RDMA Read completions 630 are ordered among themselves and how they relate to corresponding 631 operations on the remote peer(s). These relationships are likely 632 to be a function of the underlying transport characteristics. 634 There are some additional characteristics of RDMA which may 635 translate poorly to unreliable or multipoint transports due to 636 attendant complexities in managing endpoint state: 638 o Send flow control 640 o RDMA Read 642 These difficulties can be overcome by placing restrictions on the 643 service provided by RDMA. However, many RDMA clients, especially 644 those that separate data transfer and application logic concerns, 645 are likely to depend upon capabilities only provided by RDMA on a 646 point-to-point, reliable transport. 648 3. Security Considerations 650 System integrity must be maintained in any RDMA solution. 651 Mechanisms must be specified to prevent RDMA or DDP operations from 652 impairing system integrity. For example, the threat caused by 653 potential buffer overflow needs full examination, and prevention 654 mechanisms must be spelled out. 656 Because a Steering Tag exports access to a memory region, one 657 critical aspect of security is the scope of this access. It must 658 be possible to individually control specific attributes of the 659 access provided by a Steering Tag, including remote read access, 660 remote write access, and others that might be identified. DDP and 661 RDMA specifications must provide both implementation requirements 662 relevant to this issue, and guidelines to assist implementors in 663 making the appropriate design decisions. 665 Resource issues leading to denial-of-service attacks, overwrites 666 and other concurrent operations, the ordering of completions as 667 required by the RDMA protocol, and the granularity of transfer are 668 all within the required scope of any security analysis of RDMA and 669 DDP. 671 4. IANA Considerations 673 IANA considerations are not addressed by this document. Any 674 IANA considerations resulting from the use of DDP or RDMA must be 675 addressed in the relevant standards. 677 5. Acknowledgements 679 The authors wish to acknowledge the valuable contributions of David 680 Black, Jeff Mogul and Allyn Romanow. 682 6. Informative References 684 [DAFS] 685 DAFS Collaborative, "Direct Access File System Specification 686 v1.0", September 2001, available from 687 http://www.dafscollaborative.org 689 [FCVI] 690 ANSI Technical Committee T11, "Fibre Channel Standard Virtual 691 Interface Architecture Mapping", ANSI/NCITS 357-2001, March 692 2001, available from http://www.t11.org/t11/stat.nsf/fcproj 694 [IB] InfiniBand Trade Association, "InfiniBand Architecture 695 Specification Volumes 1 and 2", Release 1.1, November 2002, 696 available from http://www.infinibandta.org/specs 698 [MYR] 699 VMEbus International Trade Association, "Myrinet on VME 700 Protocol Specification", ANSI/VITA 26-1998, August 1998, 701 available from http://www.myri.com/open-specs 703 RFC Editor note: 704 Replace following problem statement draft-ietf- name, status and 705 date with appropriate reference when assigned. 707 [ROM] 708 A. Romanow, J. Mogul, T. Talpey and S.
Bailey, "RDMA over IP 709 Problem Statement", draft-ietf-rddp-problem-statement-02, Work 710 in Progress, June 2003 712 [SCTP] 713 R. Stewart et al., "Stream Transmission Control Protocol", RFC 714 2960, Standards Track 716 [SDP] 717 InfiniBand Trade Association, "Sockets Direct Protocol v1.0", 718 Annex A of InfiniBand Architecture Specification Volume 1, 719 Release 1.1, November 2002, available from 720 http://www.infinibandta.org/specs 722 [SRVNET] 723 R. Horst, "TNet: A reliable system area network", IEEE Micro, 724 pp. 37-45, February 1995 726 [VI] Compaq Computer Corp., Intel Corporation and Microsoft 727 Corporation, "Virtual Interface Architecture Specification 728 Version 1.0", December 1997, available from 729 http://www.vidf.org/info/04standards.html 731 Authors' Addresses 733 Stephen Bailey 734 Sandburst Corporation 735 600 Federal Street 736 Andover, MA 01810 USA 737 USA 739 Phone: +1 978 689 1614 740 Email: steph@sandburst.com 741 Tom Talpey 742 Network Appliance 743 375 Totten Pond Road 744 Waltham, MA 02451 USA 746 Phone: +1 781 768 5329 747 Email: thomas.talpey@netapp.com 749 Full Copyright Statement 751 Copyright (C) The Internet Society (2003). All Rights Reserved. 753 This document and translations of it may be copied and furnished to 754 others, and derivative works that comment on or otherwise explain 755 it or assist in its implementation may be prepared, copied, 756 published and distributed, in whole or in part, without restriction 757 of any kind, provided that the above copyright notice and this 758 paragraph are included on all such copies and derivative works. 759 However, this document itself may not be modified in any way, such 760 as by removing the copyright notice or references to the Internet 761 Society or other Internet organizations, except as needed for the 762 purpose of developing Internet standards in which case the 763 procedures for copyrights defined in the Internet Standards process 764 must be followed, or as required to translate it into languages 765 other than English. 767 The limited permissions granted above are perpetual and will not be 768 revoked by the Internet Society or its successors or assigns. 770 This document and the information contained herein is provided on 771 an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET 772 ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR 773 IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 774 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 775 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.