2 Delay Tolerant Networking Research Group S. Burleigh 3 Internet Draft NASA/Jet Propulsion Laboratory 4 M. Ramadas 5 July 2005 Ohio University 6 Expires January 2006 S. Farrell 7 Trinity College Dublin 9 Licklider Transmission Protocol - Motivation 11 Status of this Memo 13 By submitting this Internet-Draft, each author represents that any 14 applicable patent or other IPR claims of which he or she is aware 15 have been or will be disclosed, and any of which he or she becomes 16 aware will be disclosed, in accordance with Section 6 of BCP 79. 18 Internet-Drafts are working documents of the Internet Engineering 19 Task Force (IETF), its areas, and its working groups. Note that 20 other groups may also distribute working documents as Internet- 21 Drafts. 23 Internet-Drafts are draft documents valid for a maximum of six months 24 and may be updated, replaced, or obsoleted by other documents at any 25 time. It is inappropriate to use Internet-Drafts as reference 26 material or to cite them other than a "work in progress." 28 The list of current Internet-Drafts can be accessed at 29 http://www.ietf.org/1id-abstracts.html 31 The list of Internet-Draft Shadow Directories can be accessed at 32 http://www.ietf.org/shadow.html 34 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 35 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 36 document are to be interpreted as described in [B97]. 38 Discussions on this internet-draft are being made in the Delay 39 Tolerant Networking Research Group (DTNRG) mailing list. More 40 information can be found in the DTNRG web-site at 41 http://www.dtnrg.org 43 Abstract 45 This document describes the Licklider Transmission Protocol (LTP) 46 designed to provide retransmission-based reliability over links 47 characterized by extremely long message round-trip times (RTTs) 48 and/or frequent interruptions in connectivity. Since communication 49 across interplanetary space is the most prominent example of this 50 sort of environment, LTP is principally aimed at supporting "long- 51 haul" reliable transmission in interplanetary space, but has 52 applications in other environments as well. 54 In an Interplanetary Internet setting deploying the Bundling protocol 55 being developed by the Delay Tolerant Networking Research Group, LTP 56 is intended to serve as a reliable convergence layer over single hop 57 deep-space RF links. LTP does ARQ of data transmissions by soliciting 58 selective-acknowledgment reception reports. It is stateful and has 59 no negotiation or handshakes. 61 Table of Contents 63 1. Introduction ................................................. 3 64 2. Motivation ...................................................
4 65 2.1 IPN Operating Environment ................................ 4 66 2.2 Why not Standard Internet Protocols? ..................... 6 67 3. Features ..................................................... 7 68 3.1 Massive State Retention .................................. 8 69 3.1.1 Multiplicity of Protocol State Machines ............. 8 70 3.1.2 Session IDs ......................................... 9 71 3.1.3 Use of Non-volatile Storage ......................... 9 72 3.2 Absence of Negotiation ................................... 9 73 3.3 Partial Reliability ...................................... 10 74 3.4 Laconic Acknowledgment ................................... 11 75 3.5 Adjacency ................................................ 12 76 3.6 Optimistic and Dynamic Timeout Interval Computation ...... 13 77 3.7 Deferred Transmission .................................... 14 78 4. Overall Operation ............................................ 14 79 4.1 Nominal Operation ........................................ 14 80 4.2 Retransmission ........................................... 16 81 4.3 Accelerated Retransmission ............................... 18 82 4.4 Session Cancellation ..................................... 19 83 5. Functional Model ............................................. 20 84 5.1 Deferred Transmission .................................... 20 85 5.2 Timers ................................................... 20 86 6. Tracing LTP back to CFDP ..................................... 23 87 7. Security Considerations ...................................... 25 88 8. IANA Considerations .......................................... 25 89 9. Acknowledgments .............................................. 25 90 10. References ................................................... 25 91 10.1 Normative References ..................................... 25 92 10.2 Informative References ................................... 26 93 11. Author's Addresses ........................................... 26 94 12. Copyright Statement .......................................... 27 96 1. Introduction 98 The Licklider Transmission Protocol (LTP) described in this memo is 99 designed to provide retransmission-based reliability over links 100 characterized by extremely long message round-trip times and/or 101 frequent interruptions in connectivity. Communication in 102 interplanetary space is the most prominent example of this sort of 103 environment, and LTP is principally aimed at supporting "long-haul" 104 reliable transmission over deep-space RF links. 106 Since 1982 the principal source of standards for space communications 107 has been the Consultative Committee for Space Data Systems (CCSDS) 108 [CCSDS]. Engineers of CCSDS member agencies have developed 109 communication protocols that function within the constraints imposed 110 by operations in deep space. These constraints differ sharply from 111 those within which the Internet typically functions: 113 o Extremely long signal propagation delays, on the order of 114 seconds, minutes, or hours rather than milliseconds. 116 o Frequent and lengthy interruptions in connectivity. 118 o Low levels of traffic coupled with high rates of 119 transmission error. 121 o Meager bandwidth and highly asymmetrical data rates. 123 The CCSDS File Delivery Protocol (CFDP) [CFDP] in particular, 124 automates reliable file transfer across interplanetary distances by 125 detecting data loss and initiating the requisite retransmission 126 without mission operator intervention. 
128 CFDP by itself is sufficient for operating individual missions, but 129 its built-in networking capabilities are rudimentary. In order to 130 operate within the IPN environment it must rely on the routing and 131 incremental retransmission capabilities of the Bundling protocol [BP] 132 defined for Delay-Tolerant Networks [DTN]. LTP is intended to serve 133 as a reliable "convergence layer" protocol underlying Bundling in DTN 134 regions whose links are characterized by very long round-trip times. 135 Its design notions are directly descended from the retransmission 136 procedures defined for CFDP. 138 This document describes the motivation for LTP, its features, 139 functions, and overall design, and is part of a series of documents 140 describing LTP. Other documents in the series include the main 141 protocol specification document [LTP] and the protocol extensions 142 document [LTPEXT] respectively. 144 2. Motivation 146 2.1 IPN Operating Environment 148 There are a number of fundamental differences between the environment 149 for terrestrial communications and the operating environments 150 envisioned for the IPN. 152 The most challenging difference between communication among points on 153 Earth and communication among planets is round-trip delay, of which 154 there are two main sources, both relatively intractable: natural law 155 and economics. 157 The more obvious type of delay imposed by nature is signal 158 propagation time. Our inability to transmit data at speeds higher 159 than the speed of light means that while round-trip times in the 160 terrestrial Internet range from milliseconds to a few seconds, 161 minimum round-trip times to Mars range from 8 to 40 minutes, 162 depending on the planet's position. Round-trip times between Earth 163 and Jupiter's moon Europa run between 66 and 100 minutes. 165 Less obvious and more dynamic is the delay imposed by occultation. 166 Communication between planets must be by radiant transmission, which 167 is usually possible only when the communicating entities are in line 168 of sight of each other. An entity residing on a planetary surface 169 will be periodically taken out of sight by the planet's rotation (it 170 will be "on the other side of" the planet); an entity that orbits a 171 planet will be periodically taken out of sight by orbital motion (it 172 will be "behind" the planet); and planets themselves lose mutual 173 visibility when their trajectories take them to opposite sides of the 174 Sun. During the time that communication is impossible, delivery is 175 impaired and messages must wait in a queue for later transmission. 177 Round-trip times and occultations can at least be readily computed 178 given the ephemerides of the communicating entities. Additional 179 delay that is less easily predictable is introduced by discontinuous 180 transmission support, which is rooted in economics. 182 Communicating over interplanetary distances requires expensive 183 special equipment: large antennas, high-performance receivers, etc. 184 For most deep-space missions, even non-NASA ones, these are currently 185 provided by NASA's Deep Space Network (DSN) [DSN]. The communication 186 resources of the DSN are currently oversubscribed and will probably 187 remain so for the foreseeable future. 
While studies have been done 188 as to the feasibility of upgrading or replacing the current DSN, the 189 number of deep space missions will probably continue to grow faster 190 than the terrestrial infrastructure that supports them, making over- 191 subscription a persistent problem. Radio contact via the DSN must 192 therefore be carefully scheduled and is often severely limited. 194 This over-subscription means that the round-trip times experienced by 195 packets will be affected not only by the signal propagation delay and 196 occultation, but also by the scheduling and queuing delays imposed by 197 management of Earth-based resources: packets to be sent to a given 198 destination may have to be queued until the next scheduled contact 199 period, which may be hours, days, or even weeks away. While queuing 200 and scheduling delays are generally known well in advance except when 201 missions need emergency service (such as during landings and 202 maneuvers), the long and highly variable delays make the design of 203 timers, and retransmission timers in particular, quite difficult. 205 Another significant difference between deep space and terrestrial 206 communication is bandwidth availability. The combined effects of 207 large distances (resulting in signal attenuation), the expense and 208 difficulty of deploying large antennas to distant planets, and the 209 difficulty of generating electric power in space all mean that the 210 available bandwidth for communication in the IPN will likely remain 211 modest compared to terrestrial systems. Maximum data rates on the 212 order of a few tens of megabits per second will probably be the norm 213 for the next few decades. 215 Moreover, the available bandwidths are highly asymmetrical: data are 216 typically transmitted at different rates in different directions on 217 the same link. Current missions are usually designed with a much 218 higher data "return" rate (from spacecraft to Earth) than "command" 219 rate (from Earth to spacecraft). The reason for the asymmetry is 220 simple: nobody ever wanted a high-rate command channel, and, all else 221 being equal, it was deemed better to have a more reliable command 222 channel than a faster one. This design choice has led to data rate 223 asymmetries in excess of 100:1, sometimes approaching 1000:1. A 224 strong desire for a very robust command channel will probably remain, 225 so any transport protocol designed for use in the IPN will need to 226 function with a relatively low-bandwidth outbound channel to 227 spacecraft and landers. 229 The difficulty of generating power on and around other planets will 230 also result in relatively low signal-to-noise ratios and thus high 231 bit error rates. Current deep-space missions operate with raw bit 232 error rates on the order of 10^(-1) to 10^(-3); while heavy coding is 233 used to reduce error rates, the coding overhead further reduces the 234 residual bandwidth available for data transmission. 236 Signal propagation delay is the only truly immutable characteristic 237 that distinguishes the IPN from terrestrial communications. Queuing 238 and scheduling delays, low data rates, intermittent connectivity, and 239 high bit error rates can all be mitigated or eliminated by adding 240 more infrastructure. But this additional infrastructure is likely to 241 be provided (if at all) only in the more highly developed core areas 242 of the IPN. 
We see the IPN growing outwards from Earth as we explore 243 more and more planets, moons, asteroids, and possibly other stars. 244 This suggests that there will always be a "fringe" to the fabric of 245 the IPN, an area without a rich communications infrastructure. The 246 delay, data rate, connectivity, and error characteristics mentioned 247 above will probably always be an issue somewhere in the IPN. 249 2.2 Why not Standard Internet Protocols? 251 These environmental characteristics - long delays, low and asymmetric 252 bandwidth, intermittent connectivity, and relatively high error rates 253 - make using unmodified TCP for end-to-end communications in the IPN 254 infeasible. Using the TCP throughput equation from [TFRC] we can 255 calculate the loss event rate (p) required to achieve a given steady- 256 state throughput. Assuming the minimum RTT to Mars from planet Earth 257 is 8 minutes (one-way speed of light delay to Mars at its closest 258 approach to Earth is 4 minutes), assuming a packet size of 1500 259 bytes, assuming that the receiver acknowledges every other packet, 260 and ignoring negligible higher order terms in p (i.e., ignoring the 261 second additive term in the denominator of the TCP throughput 262 equation), we obtain the following table of loss event rates required 263 to achieve various throughput values. 265 Throughput Loss event rate (p) 266 ---------- ------------------- 267 10 Mbps 4.68 * 10^(-12) 268 1 Mbps 4.68 * 10^(-10) 269 100 Kbps 4.68 * 10^(-8) 270 10 Kbps 4.68 * 10^(-6) 272 Note that although multiple losses encountered in a single RTT are 273 treated as a single loss event in the TCP throughput equation from 274 [TFRC], such loss event rates are still unrealistic on deep space 275 links. 277 The throughput values above are, moreover, only steady-state upper 278 bounds. Since the number of packets in an episode of connectivity 279 will generally be under 10,000 due to the low available bandwidth, 280 TCP performance would be dominated by its behavior during slow-start. 281 This means that even when Mars is at its closest approach to Earth it 282 would take a TCP session nearly 100 minutes to ramp up to an 283 Earth-Mars transmission rate of 20 kbps. 285 Note: Lab experiments using a channel emulator and standard 286 applications show that even if TCP could be pushed to work 287 efficiently at such distances, many applications either rely on 288 several rounds of handshaking or have built-in timers that render 289 them non-functional when the round-trip time is over a couple of 290 minutes. For example, it typically takes eight round trips for FTP 291 to get to a state where data can begin flowing. Since an FTP server 292 may time out and reset the connection after 5 minutes of inactivity, 293 a conformant standard FTP server could be unusable for communicating 294 even with the closest planets. 296 The SCTP protocol [SCTP] can multiplex "bundles" (note: defined 297 differently from the DTN "bundle") for multiple sessions over a 298 single connection, much as LTP does, but it still requires multiple 299 round trips for session setup before any application data can flow, 300 and so clearly does not suit the needs of the IPN operating 301 environment.
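   As a cross-check on the figures in the table above, the short
   Python script below (an illustrative sketch, not part of any LTP
   specification) inverts the simplified TFRC throughput equation
   X = s / (R * sqrt(2*b*p/3)), i.e., with the retransmission-timeout
   term dropped, under the same assumptions: 1500-byte packets, an
   8-minute round-trip time, and one acknowledgment for every other
   packet (b = 2).

      # Illustrative sketch only: reproduce the loss event rates above
      # by solving X = s / (R * sqrt(2*b*p/3)) for p.
      s = 1500 * 8          # packet size in bits
      R = 2 * 4 * 60        # round-trip time in seconds (8 minutes)
      b = 2                 # packets acknowledged per ACK

      def loss_event_rate(x_bps):
          """Loss event rate p needed to sustain throughput x_bps."""
          return 3 * s ** 2 / (2 * b * (x_bps * R) ** 2)

      for x in (10e6, 1e6, 100e3, 10e3):
          print(f"{x / 1e3:8.0f} kbps  p = {loss_event_rate(x):.2e}")

      # Prints p = 4.69e-12, 4.69e-10, 4.69e-08 and 4.69e-06, i.e.,
      # 4.6875 * 10^(-n), which the table above shows truncated to
      # 4.68 * 10^(-n).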
303 3. Features 305 The design of LTP differs from that of TCP in several significant 306 ways. The common themes running through these differences are two 307 central design assumptions, both of which amount to making virtues of 308 necessity. 310 First: given the severe innate constraints on interplanetary 311 communication discussed above, we assume that the computational 312 resources available to LTP engines will always be ample compared to 313 the communication resources available on the link between them. 315 Certainly in many cases the computational resources available to a 316 given LTP engine - such as one on board a small robotic spacecraft - 317 will not be ample by the standards of the Internet. But in those 318 cases we expect that the associated communication resources 319 (transmitter power, antenna size) will be even less ample, preserving 320 the expected disproportion between the two. 322 Second, we note that establishing a communication link across 323 interplanetary distance entails enacting several hardware 324 configuration measures based on the presumed operational state of the 325 remote communicating entity, such as: 327 o orienting a directional antenna correctly; 329 o tuning a transponder to pre-selected transmission and/or 330 reception frequencies; 332 o diverting precious electrical power to the transponder at the 333 last possible moment, and for the minimum necessary length of 334 time. 336 We therefore assume that the operating environment in which LTP 337 functions is able to pass information on the link status (termed 338 "link state cues" in this document) to LTP, telling it which remote 339 LTP engine(s) should currently be transmitting to the local LTP 340 engine and vice versa. The operating environment itself must have 341 this information in order to configure communication link hardware 342 correctly. 344 3.1 Massive State Retention 346 Like any reliable transport service employing ARQ, LTP is "stateful". 347 In order to assure the reception of a block of data it has sent, LTP 348 must retain for possible retransmission all portions of that block 349 which might not have been received yet. In order to do so, it must 350 keep track of which portions of the block are known to have been 351 received so far, and which are not, together with any additional 352 information needed for purposes of retransmitting part or all of that 353 block. 355 Long round-trip times mean substantial delay between the transmission 356 of a block of data and the reception of an acknowledgment from the 357 block's destination, signaling arrival of the block. If LTP 358 postponed transmission of additional blocks of data until it received 359 acknowledgment of the arrival of all prior blocks, valuable 360 opportunities to utilize what little deep space transmission 361 bandwidth is available would be forever lost. 363 For this reason, LTP is based in part on a notion of massive state 364 retention. Any number of requested transmissions may be concurrently 365 "in flight" at various displacements along the link between two LTP 366 engines, and the LTP engines must necessarily retain transmission 367 status and retransmission resources for all of them. Moreover, if 368 any of the data of a given block are lost en route, it will be 369 necessary to retain the state of that transmission during an 370 additional round trip while the lost data are retransmitted; even 371 multiple retransmission cycles may be necessary. 373 In sum, LTP transmission state information persists for a long time 374 because a long time must pass before LTP can be assured of 375 transmission success - so LTP must retain a great deal of state 376 information.
Since the alternatives are non-reliability on the one 377 hand and severe under-utilization of transmission opportunities on 378 the other, we believe such massive statefulness is cost-justified 379 (though probably not for all LTP applications). 381 3.1.1 Multiplicity of Protocol State Machines 382 This design decision is reflected in a significant structural 383 difference between LTP and TCP. 385 Both TCP and LTP provide mechanisms for multiplexing access by a 386 variety of higher-layer services or applications: LTP's "client 387 service IDs" correspond to TCP's port numbers. Also, both TCP and 388 LTP implement devices for encapsulating threads of retransmission 389 protocol (protocol state machines): LTP's "sessions" functionally 390 correspond to TCP connections. At any moment each such thread of 391 retransmission protocol is engaged in conveying a single block of 392 data from one protocol end-point to another. 394 However, a single TCP association (local host address, local port 395 number, foreign host address, foreign port number) can accommodate at 396 most one connection at any one time. In contrast, a single LTP 397 association (local engine ID, local client service ID, foreign engine 398 ID, foreign client service ID) can accommodate multiple concurrent 399 sessions, one for each block of data in transit on the association. 401 3.1.2 Session IDs 403 In TCP, the fact that any single association is occupied by at most 404 one connection at any time enables the protocol to use host addresses 405 and port numbers to demultiplex arriving data to the appropriate 406 protocol state machines. LTP's possible multiplicity of sessions per 407 association makes it necessary for each segment of application data 408 to include an additional demultiplexing token, a "session ID" that 409 uniquely identifies the session in which the segment was issued and, 410 implicitly, the block of data being conveyed by this session. 412 3.1.3 Use of Non-volatile Storage 414 Another important implication of massive statefulness is that 415 implementations of LTP should consider retaining transmission state 416 information in non-volatile storage of some kind, such as magnetic 417 disk or flash memory. 419 Transport protocols such as TCP typically retain transmission state 420 in dynamic RAM. If the device on which the software resides is 421 rebooted or powered down then, although all transmissions currently 422 in progress are aborted, the resulting degree of communication 423 disruption is limited because the number of concurrent connections is 424 limited. Rebooting or power-cycling a computer on which an LTP 425 engine resides could abort a much larger number of transmission 426 sessions in various stages of completion, at a much larger total 427 cost. 429 3.2 Absence of Negotiation 430 In the IPN, round-trip times may be so long and communication 431 opportunities so brief that a negotiation exchange, such as an 432 adjustment of transmission rate, might not be completed before 433 connectivity is lost. Even if connectivity is uninterrupted, waiting 434 for negotiation to complete before revising data transmission 435 parameters might well result in costly under-utilization of link 436 resources. 438 For this reason, LTP communication session parameters are asserted 439 unilaterally, subject to application-level network management 440 activity that may not take effect for hours, days, or weeks. 
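   To make the structural consequences of Sections 3.1 and 3.2
   concrete, the sketch below is a hypothetical illustration (the
   record and field names are assumptions, not taken from the LTP
   specifications): sessions are keyed by a session ID rather than by
   the association alone, so any number of blocks can be in flight
   concurrently between the same pair of engines; each state record
   can be written to non-volatile storage so that a reboot need not
   abort every session in progress; and the parameters a record
   carries are asserted unilaterally rather than negotiated.

      # Hypothetical sketch of sender-side LTP session state; field
      # names and layout are illustrative, not drawn from the LTP spec.
      import json
      import pathlib
      from dataclasses import dataclass, field, asdict

      @dataclass
      class SendingStateRecord:
          session_id: tuple        # (source engine ID, session number)
          client_service_id: int   # multiplexing token, like a TCP port
          dest_engine_id: int
          red_length: int          # leading bytes to be delivered reliably
          block_length: int
          unacked_red_ranges: list = field(default_factory=list)
          # Asserted unilaterally; never negotiated with the peer engine.
          timeout_margin_s: int = 2

      class SessionTable:
          """Any number of concurrent sessions per association."""
          def __init__(self, storage_dir):
              self.storage = pathlib.Path(storage_dir)
              self.storage.mkdir(parents=True, exist_ok=True)
              self.sessions = {}

          def open_session(self, ssr):
              self.sessions[ssr.session_id] = ssr
              self._persist(ssr)   # survive a reboot or power cycle

          def _persist(self, ssr):
              name = "ssr-%d-%d.json" % ssr.session_id
              (self.storage / name).write_text(json.dumps(asdict(ssr)))

   A receiving engine could organize its Receiving State Records the
   same way, keyed by the session ID carried in every arriving
   segment.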
442 3.3 Partial Reliability 444 For environments where application data is not critical, overall link 445 bandwidth utilization may be improved if the data is transmitted on a 446 "best efforts" basis, i.e., without being subject to acknowledgment 447 and retransmission. However, we believe that for many applications, 448 unreliable transmission of data is likely to be useful only if any 449 application headers/meta-data describing the actual data are received 450 reliably. For example, suppose a block transmission involves a high- 451 definition photograph (and can afford to be sent on "best efforts"): 452 the first 40 bytes of the block might be a prologue containing 453 information such as camera settings and time of exposure, without 454 which the photograph data is useless, while the actual photograph 455 data is an array of fixed-length scan lines. In this case the 456 assured delivery of the first 40 bytes of the block is critical for 457 interpreting the data, but the loss of a few individual scan lines 458 may not be important enough to justify the cost of retransmission. A 459 more typical example would be when the bundling protocol [BP] was the 460 upper-layer protocol operating over LTP : if a bundle needs to be 461 transmitted on "best efforts", it would at least be expected to have 462 the bundle protocol header received reliably, or the bundle itself 463 would be meaningless to the receiving bundling node. 465 The motivation for "partially reliable" transmission, as opposed to 466 an alternative unreliable mode, is therefore to provide a mechanism 467 for upper layer protocols to get any of their header and meta-data 468 transmitted reliably (as necessary) but have the actual data 469 transmitted unreliably. LTP regards each block of data as comprising 470 two parts: a "red-part", whose delivery must be assured by 471 acknowledgment and retransmission as necessary, and a "green-part" 472 whose delivery is attempted but not assured. 474 The length of either part may be zero; that is, any given block may 475 be designated entirely red (retransmission continues until reception 476 of the entire block has been asserted by the receiver) or entirely 477 green (no part of the block is acknowledged or retransmitted). Thus 478 LTP can provide both TCP-like and UDP-like functionality concurrently 479 on a single association. 481 Note that in a red-green block transmission, the red-part data does 482 NOT convey any urgency or higher-priority semantics relative to the 483 block's green-part data; the red-part data is merely intended to 484 carry imperative meta-data without which green-part data reception is 485 likely to be futile. 487 3.4 Laconic Acknowledgment 489 Another respect in which LTP differs from TCP is that, while TCP 490 connections are bidirectional (blocks of application data may be 491 flowing in both directions on any single connection), LTP sessions 492 are unidirectional. This design decision derives from the fact that 493 the flow of data in deep space flight missions is usually 494 unidirectional. (Long round trip times make interactive spacecraft 495 operation infeasible, so spacecraft are largely autonomous and 496 command traffic is very light.) 
498 One could imagine an LTP instance, upon being asked to transmit a 499 block of data, searching through all existing sessions in hopes of 500 finding one that was established upon reception of data from the new 501 block's destination; transmission of the new block could be 502 piggybacked on the acknowledgment traffic for that session. But the 503 prevailing unidirectionality of space data communications means that 504 such a search would frequently fail and a new unidirectional session 505 would have to be established anyway. Session bidirectionality 506 therefore seemed to entail somewhat greater complexity unmitigated by 507 any clear performance advantage, so we abandoned it. Bidirectional 508 data transfer is instead accomplished by opening two individual LTP 509 sessions. 511 Since they are not piggybacked on data segments, LTP data 512 acknowledgments - "reception reports" - are carried in a separate 513 segment type. To minimize consumption of low and asymmetric 514 transmission bandwidth in the IPN, these report segments are sent 515 infrequently; each one contains a comprehensive report of all data 516 received within some specified range of offsets from the start of the 517 transmitted block. The expectation is that most data segments will 518 arrive safely, so individual acknowledgment of each one would be 519 expensive in information-theoretical terms: the real information 520 provided per byte of acknowledgment data transmitted would be very 521 small. Instead, report segments are normally sent only upon 522 encountering explicit solicitations for reception reports - 523 "checkpoints" - in the sequence of incoming data segments. 525 The aggregate nature of reception reports gives LTP transmission an 526 episodic character: 528 o "Original transmissions" are sequences of data segments issued 529 in response to transmission requests from client services. 531 o "Retransmissions" are sequences of data segments issued in 532 response to the arrival of report segments that indicate 533 incomplete reception. 535 Checkpoints are mandatory only at the end of the red-part of each 536 original transmission and at the end of each retransmission. For 537 applications that require accelerated retransmission (and can afford 538 the additional bandwidth consumption entailed), reception reporting 539 can be more aggressive. Additional checkpoints may optionally be 540 inserted at other points in the red-part of an original transmission, 541 and additional reception reports may optionally be sent on an 542 asynchronous basis during reception of the red-part of an original 543 transmission. 545 3.5 Adjacency 547 TCP reliability is "end to end": traffic between two TCP endpoints 548 may traverse any number of intermediate network nodes, and two 549 successively transmitted segments may travel by entirely different 550 routes to reach the same destination. The underlying IP network- 551 layer protocol accomplishes this routing. Although TCP always 552 delivers data segments to any single port in order and without gaps, 553 the IP datagrams delivered to TCP itself may not arrive in the order 554 in which they were transmitted. 556 In contrast, LTP is a protocol for "point to point" reliability on a 557 single link: traffic between two LTP engines is expected not to 558 traverse any intermediate relays. Point-to-point topology is innate 559 in the nature of deep space communication, which is simply the 560 exchange of radiation between two mutually visible antennae. 
No 561 underlying network infrastructure is presumed, so no underlying 562 network-layer protocol activity is expected; the underlying 563 communication service is assumed to be a point-to-point link-layer 564 protocol such as CCSDS Telemetry/Telecommand [TM][TC] (or, for 565 terrestrial applications, PPP). The contents of link-layer frames 566 delivered to LTP are always expected to arrive in the order in which 567 they were transmitted, though possibly with any number of gaps due to 568 data loss or corruption. 570 Note that building an interplanetary network infrastructure - the 571 DTN-based architecture of the IPN - *on top of* LTP does not conflict 572 with LTP design principles. Bundling functions as an overlay network 573 protocol, and LTP bears essentially the same relationship to Bundling 574 that a reliable link protocol (for example, the ARQ capabilities of 575 LLC) would bear to IP. 577 The design of LTP relies heavily on this topological premise, in at 578 least two ways: 580 If two successively transmitted segments could travel by 581 materially different routes to reach the same destination, then 582 the assumption of rough determinism in timeout interval 583 computation discussed below would not hold. Our inability to 584 estimate timeout intervals with any accuracy would severely 585 compromise performance; while spurious timeouts cause redundant 586 retransmissions wasting precious bandwidth, overly conservative 587 timeout intervals delay loss recovery. 589 If data arrived at an LTP engine out of transmission order, then 590 the assumptions based on which the rules for reception reporting 591 are designed would no longer hold. A more complex and/or less 592 efficient retransmission mechanism would be needed. 594 3.6 Optimistic and Dynamic Timeout Interval Computation 596 TCP determines timeout intervals by measuring and recording actual 597 round trip times, then applying statistical techniques to recent RTT 598 history to compute a predicted round trip time for each transmitted 599 segment. 601 The problem is at once both simpler and more difficult for LTP: 603 Since multiple sessions can be conducted on any single 604 association, retardation of transmission on any single session 605 while awaiting a timeout need not degrade communication 606 performance on the association as a whole. Timeout intervals that 607 would be intolerably optimistic in TCP don't necessarily degrade 608 LTP's bandwidth utilization. 610 But the reciprocal half-duplex nature of LTP communication makes 611 it infeasible to use statistical analysis of round-trip history as 612 a means of predicting round-trip time. The round-trip time for 613 transmitted segment N could easily be orders of magnitude greater 614 than that for segment N-1 if there happened to be a transient loss 615 of connectivity between the segment transmissions. 617 Since statistics derived from round-trip history cannot safely be 618 used as a predictor of LTP round-trip times, we have to assume that 619 round-trip timing is at least roughly deterministic - i.e., that 620 sufficiently accurate RTT estimates can be computed individually in 621 real time from available information. 623 This computation is performed in two stages: 625 We calculate a first approximation of RTT by simply doubling the 626 known one-way light time to the destination and adding an 627 arbitrary margin for any additional anticipated latency, such as 628 queuing and processing delay at both ends of the transmission. 
629 For deep space operations, the margin value is typically a small 630 number of whole seconds. Although such a margin is enormous by 631 Internet standards, it is insignificant compared to the two-way 632 light time component of round-trip time in deep space. We choose 633 to risk tardy retransmission, which will postpone delivery of one 634 block by a relatively tiny increment, in preference to premature 635 retransmission, which will unnecessarily consume precious 636 bandwidth and thereby degrade overall performance. 638 Then, to account for the additional delay imposed by interrupted 639 connectivity, we dynamically suspend timers during periods when 640 the relevant remote LTP engines are known to be unable to transmit 641 responses. This knowledge of the operational state of remote 642 entities is assumed to be provided by link state cues from the 643 operating environment, as discussed earlier. 645 3.7 Deferred Transmission 647 Link state cues also notify LTP when it is and isn't possible to 648 transmit segments by passing them to the underlying communication 649 service. 651 Continuous duplex communication is the norm for TCP operations in the 652 Internet; when communication links are not available, TCP simply does 653 not operate. In deep space communications, however, at no moment can 654 there ever be any expectation of two-way connectivity. It is always 655 possible for LTP to be generating outbound segments - in response to 656 received segments, timeouts, or requests from client services - that 657 cannot immediately be transmitted. These segments must be queued for 658 transmission at a later time when a link has been established, as 659 signaled by a link state cue. 661 4. Overall Operation 663 4.1 Nominal Operation 665 The nominal sequence of events in an LTP transmission session is as 666 follows. 668 Operation begins when a client service instance asks an LTP engine to 669 transmit a block to a remote client service instance. The sending 670 engine opens a Sending State Record (SSR) for a new session, thereby 671 starting the session, and notifies the client service instance that 672 the session has been started. The sending engine then initiates an 673 original transmission: it queues for transmission as many data 674 segments as are necessary to transmit the entire block, within the 675 constraints on maximum segment size imposed by the underlying 676 communication service. The last segment of the red-part of the block 677 is marked as the End of Red-Part (EORP) indicating the end of red- 678 part data for the block, and as a checkpoint indicating that the 679 receiving engine must issue a reception report upon receiving the 680 segment. The last segment of the block overall is marked End of 681 Block (EOB) indicating that the receiving engine can calculate the 682 size of the block by summing the offset and length of the data in the 683 segment. 685 At the next opportunity, subject to allocation of bandwidth to the 686 queue into which the block data segments were written, the enqueued 687 segments are transmitted to the LTP engine serving the remote client 688 service instance. A timer is started for the EORP, so that it can be 689 retransmitted automatically if no response is received. 691 On reception of the first data segment for the block, the receiving 692 engine opens a Receiving State Record (RSR) for the new session and 693 notifies the local instance of the relevant client service that the 694 session has been started. 
In the nominal case it receives all 695 segments of the original transmission without error. Therefore on 696 reception of the EORP data segment it responds by (a) queuing for 697 transmission to the sending engine a report segment indicating 698 complete reception and (b) delivering the received red-part of the 699 block to the local instance of the client service; on reception of 700 each data segment of the green-part, it responds by immediately 701 delivering the received data to the local instance of the client 702 service. 704 At the next opportunity, the enqueued report segment is immediately 705 transmitted to the sending engine and a timer is started so that the 706 report segment can be retransmitted automatically if no response is 707 received. 709 The sending engine receives the report segment, turns off the timer 710 for the EORP, enqueues for transmission to the receiving engine a 711 report-acknowledgment segment, notifies the local client service 712 instance that the red-part of the block has been successfully 713 transmitted, and closes the SSR for the session. 715 At the next opportunity, the enqueued report-acknowledgment segment 716 is immediately transmitted to the receiving engine. 718 The receiving engine receives the report-acknowledgment segment, 719 turns off the timer for the report segment, and closes the RSR for 720 the session. 722 Closing both the SSR and RSR for a session terminates the session. 724 4.2 Retransmission 726 Loss or corruption of transmitted segments may cause the operation of 727 LTP to deviate from the nominal sequence of events described above. 729 Loss of one or more red-part data segments other than the EORP 730 segment triggers data retransmission: 732 Rather than returning a single reception report indicating complete 733 reception, the receiving engine returns a reception report comprising 734 as many report segments as are needed in order to report in detail on 735 all red-part data reception for this session (other than data 736 reception that was previously reported in response to any 737 discretionary checkpoints, described later), within the constraints 738 on maximum segment size imposed by the underlying communication 739 service. [Still, only one report segment is normally returned; 740 multiple report segments are needed only when a large number of 741 segments comprising non-contiguous intervals of red-part block data 742 are lost or when the receiver-to-sender path MTU is small.] A timer 743 is started for each report segment. 745 On reception of each report segment the sending engine responds as 746 follows : 748 It turns off the timer for the checkpoint referenced by the report 749 segment, if any. 751 It enqueues a reception-acknowledgment segment acknowledging the 752 report segment (to turn off the report retransmission timer at the 753 receiving engine). This segment is sent immediately at the next 754 transmission opportunity. 756 If the reception claims in the report segment indicate that not 757 all data within the scope have been received, it normally 758 initiates a retransmission by enqueuing all data segments not yet 759 received. The last such segment is marked a checkpoint and 760 contains the report serial number of the report segment to which 761 the retransmission is a response. These segments are likewise 762 sent at the next transmission opportunity, but only after all data 763 segments previously queued for transmission to the receiving 764 engine have been sent. 
A timer is started for the checkpoint, so 765 that it can be retransmitted automatically if no responsive report 766 segment is received. 768 On the other hand, if the reception claims in the report segment 769 indicate that all data within the scope of the report segment have 770 been received, and the union of all reception claims received so 771 far in this session indicate that all data in the red-part of the 772 block have been received, then the sending engine notifies the 773 local client service instance that the red-part of the block has 774 been successfully transmitted and closes the SSR for the session. 776 On reception of a checkpoint segment with a non-zero report serial 777 number, the receiving engine responds as follows : 779 It first turns off the timer for the referenced report segment. 781 It then returns a reception report comprising as many report 782 segments as are needed in order to report in detail on all data 783 reception within the scope of the referenced report segment, 784 within the constraints on maximum segment size imposed by the 785 underlying communication service; a timer is started for each 786 report segment. 788 If at this point all data in the red-part of the block have been 789 received, the receiving engine delivers the received block's red- 790 part to the local instance of the client service and, upon 791 reception of reception-acknowledgment segments acknowledging all 792 report segments, closes the RSR for the session. Otherwise the 793 data retransmission cycle continues. 795 Loss of any checkpoint segment or of the responsive report segment 796 causes the checkpoint timer to expire. When this occurs, the sending 797 engine normally retransmits the checkpoint segment. Similarly, loss 798 of a report segment or of the responsive report-acknowledgment 799 segment causes the report segment's timer to expire. When this 800 occurs, the receiving engine normally retransmits the report segment. 802 Note that reception of a previously received report segment that was 803 retransmitted due to loss of the report-acknowledgment segment causes 804 another responsive report-acknowledgment segment to be transmitted, 805 but is otherwise ignored; if any of the data retransmitted in 806 response to the previously received report segment were lost, further 807 retransmission of those data will be requested by one or more new 808 report segments issued in response to that earlier retransmission's 809 checkpoint. Thus unnecessary retransmission is suppressed. 811 Note also that the responsibility for responding to segment loss in 812 LTP is shared between the sender and receiver of a block: the sender 813 retransmits checkpoint segments in response to checkpoint timeouts, 814 and it retransmits missing data in response to reception reports 815 indicating incomplete reception, while the receiver additionally 816 retransmits report segments in response to timeouts. An alternative 817 design would have been to make the sender responsible for all 818 retransmission, in which case the receiver would not expect report- 819 acknowledgment segments and would not retransmit report segments. 820 There are two disadvantages to this approach: 822 First, because of constraints on segment size that might be 823 imposed by the underlying communication service, it is at least 824 remotely possible that the response to any single checkpoint might 825 be multiple report segments. 
An additional sender-side mechanism 826 for detecting and appropriately responding to the loss of some 827 proper subset of those reception reports would be needed. We 828 believe the current design is simpler. 830 Second, an engine that receives a block needs a way to determine 831 when the session can be closed. In the absence of explicit final 832 report acknowledgment (which entails retransmission of the report 833 in case of the loss of the report acknowledgment), the 834 alternatives are (a) to close the session immediately on 835 transmission of the report segment that signifies complete 836 reception and (b) to close the session on receipt of an explicit 837 authorization from the sender. In case (a), loss of the final 838 report segment would cause retransmission of a checkpoint by the 839 sender, but the session would no longer exist at the time the 840 retransmitted checkpoint arrived; the checkpoint could reasonably 841 be interpreted as the first data segment of a new block, most of 842 which was lost in transit, and the result would be redundant 843 retransmission of the entire block. In case (b), the explicit 844 session termination segment and the responsive acknowledgment by 845 the receiver (needed in order to turn off the timer for the 846 termination segment, which in turn would be needed in case of in- 847 transit loss or corruption of that segment) would somewhat 848 complicate the protocol, increase bandwidth consumption, and 849 retard the release of session state resources at the sender. Here 850 again we believe that the current design is simpler and more 851 efficient. 853 4.3 Accelerated Retransmission 855 Data segment retransmission occurs only on receipt of a report 856 segment indicating incomplete reception; report segments are normally 857 transmitted only at the end of original transmission of the red-part 858 of a block or at the end of a retransmission. For some applications 859 it may be desirable to trigger data segment retransmission 860 incrementally during the course of red-part original transmission so 861 that the missing segments are recovered sooner. This can be 862 accomplished in two ways: 864 Any red-part data segment prior to the EORP can additionally be 865 flagged as a checkpoint. Reception of each such "discretionary" 866 checkpoint causes the receiving engine to issue a reception 867 report. 869 At any time during the original transmission of a block's red-part 870 (that is, prior to reception of any data segment of the block's 871 green-part), the receiving engine can unilaterally issue 872 additional asynchronous reception reports. Note that the CFDP 873 protocol's "Immediate" mode is an example of this sort of 874 asynchronous reception reporting [Sec 6]. The reception reports 875 generated for accelerated retransmission are processed in exactly 876 the same way as the standard reception reports. 878 4.4 Session Cancellation 880 A transmission session may be canceled by either the sending or the 881 receiving engine in response either to a request from the local 882 client service instance or to an LTP operational failure as noted 883 earlier. Session cancellation is accomplished as follows. 885 The canceling engine deletes all currently queued segments for the 886 session and notifies the local instance of the affected client 887 service that the session is canceled. 
If no segments for this 888 session have yet been sent to or received from the corresponding LTP 889 engine, then at this point the canceling engine simply closes its 890 state record for the session and cancellation is complete. 892 Otherwise, a session cancellation segment is queued for transmission. 893 At the next opportunity, the enqueued cancellation segment is 894 immediately transmitted to the LTP engine serving the remote client 895 service instance. A timer is started for the segment, so that it can 896 be retransmitted automatically if no response is received. 898 The corresponding engine receives the cancellation segment, enqueues 899 for transmission to the canceling engine a cancellation- 900 acknowledgment segment, deletes all other currently queued segments 901 for the indicated session, notifies the local client service instance 902 that the block has been canceled, and closes its state record for the 903 session. 905 At the next opportunity, the enqueued cancellation-acknowledgment 906 segment is immediately transmitted to the canceling engine. 908 The canceling engine receives the cancellation-acknowledgment, turns 909 off the timer for the cancellation segment, and closes its state 910 record for the session. 912 Loss of a cancellation segment or of the responsive cancellation- 913 acknowledgment causes the cancellation segment timer to expire. When 914 this occurs, the canceling engine normally retransmits the 915 cancellation segment. 917 5. Functional Model 919 The functional model underlying the specification of LTP is one of 920 deferred, opportunistic transmission, with access to the active 921 transmission link apportioned between two (conceptual) outbound 922 traffic queues. The accuracy of LTP retransmission timers depends in 923 large part on faithful adherence to this model. 925 5.1 Deferred Transmission 927 In concept, every outbound LTP segment is appended to one of two 928 queues -- forming a "queue set" -- of traffic bound for the LTP 929 engine that is that segment's destination. One such traffic queue is 930 the "internal operations queue" of that queue set; the other is the 931 application data queue for the queue set. The de-queuing of a 932 segment always implies delivering it to the underlying communication 933 system for immediate transmission. Whenever the internal operations 934 queue is non-empty, the oldest segment in that queue is the next 935 segment de-queued for transmission to the destination; at all other 936 times, the oldest segment in the application data queue is the next 937 segment de-queued for transmission to the destination. 939 The production and enqueuing of a segment and the subsequent actual 940 transmission of that segment are in principle wholly asynchronous. 942 In the event that (a) a transmission link to the destination is 943 currently active and (b) the queue to which a given outbound segment 944 is appended is otherwise empty and (c) either this queue is the 945 internal operations queue or else the internal operations queue is 946 empty, the segment will be transmitted immediately upon production. 947 Transmission of a newly queued segment is necessarily deferred in all 948 other circumstances. 950 Conceptually, the de-queuing of segments from traffic queues bound 951 for a given destination is initiated upon reception of a link state 952 cue indicating that the underlying communication system is now 953 transmitting to that destination, i.e., the link to that destination 954 is now active.
It ceases upon reception of a link state cue 955 indicating that the underlying communication system is no longer 956 transmitting to that destination, i.e., the link to that destination 957 is no longer active. 959 5.2 Timers 960 LTP relies on accurate calculation of expected arrival times for 961 report and acknowledgment segments in order to know when proactive 962 retransmission is required. If a calculated time were even slightly 963 early, the result would be costly unnecessary retransmission. On the 964 other hand, calculated arrival times may safely be several seconds 965 late: the only penalties for late timeout and retransmission are 966 slightly delayed data delivery and slightly delayed release of 967 session resources. 969 The following discussion is the basis for LTP's expected arrival time 970 calculations. 972 The total time consumed in a single "round trip" (transmission and 973 reception of the original segment, followed by transmission and 974 reception of the acknowledging segment) has the following components: 976 Protocol processing time: The time consumed in issuing the 977 original segment, receiving the original segment, generating and 978 issuing the acknowledging segment, and receiving the acknowledging 979 segment. 981 Outbound queuing delay: The delay at the sender of the original 982 segment while that segment is in a queue waiting for transmission, 983 and delay at the sender of the acknowledging segment while that 984 segment is in a queue waiting for transmission. 986 Radiation time: The time that passes while all bits of the 987 original segment are being radiated, and the time that passes 988 while all bits of the acknowledging segment are being radiated. 989 (This is significant only at extremely low data transmission 990 rates.) 992 Round-trip light time: The signal propagation delay at the speed 993 of light, in both directions. 995 Inbound queuing delay: delay at the receiver of the original 996 segment while that segment is in a reception queue, and delay at 997 the receiver of the acknowledging segment while that segment is in 998 a reception queue. 1000 Delay in transmission of the original segment or the acknowledging 1001 segment due to loss of connectivity - that is, interruption in 1002 outbound link activity at the sender of either segment due to 1003 occultation, scheduled end of tracking pass, etc. 1005 In this context, where errors on the order of seconds or even minutes 1006 may be tolerated, protocol processing time at each end of the session 1007 is assumed to be negligible. 1009 Inbound queuing delay is also assumed to be negligible because, even 1010 on small spacecraft, LTP processing speeds are high compared to data 1011 transmission rates. 1013 Two mechanisms are used to make outbound queuing delay negligible: 1015 The expected arrival time of an acknowledging segment is not 1016 calculated until the moment the underlying communication system 1017 notifies LTP that radiation of the original segment has begun. 1018 All outbound queuing delay for the original segment has already 1019 been incurred at that point. 
959 5.2 Timers

960 LTP relies on accurate calculation of expected arrival times for 961 report and acknowledgment segments in order to know when proactive 962 retransmission is required. If a calculated time were even slightly 963 early, the result would be costly unnecessary retransmission. On the 964 other hand, calculated arrival times may safely be several seconds 965 late: the only penalties for late timeout and retransmission are 966 slightly delayed data delivery and slightly delayed release of 967 session resources.

969 The following discussion is the basis for LTP's expected arrival time 970 calculations.

972 The total time consumed in a single "round trip" (transmission and 973 reception of the original segment, followed by transmission and 974 reception of the acknowledging segment) has the following components:

976 Protocol processing time: The time consumed in issuing the 977 original segment, receiving the original segment, generating and 978 issuing the acknowledging segment, and receiving the acknowledging 979 segment.

981 Outbound queuing delay: The delay at the sender of the original 982 segment while that segment is in a queue waiting for transmission, 983 and delay at the sender of the acknowledging segment while that 984 segment is in a queue waiting for transmission.

986 Radiation time: The time that passes while all bits of the 987 original segment are being radiated, and the time that passes 988 while all bits of the acknowledging segment are being radiated. 989 (This is significant only at extremely low data transmission 990 rates.)

992 Round-trip light time: The signal propagation delay at the speed 993 of light, in both directions.

995 Inbound queuing delay: Delay at the receiver of the original 996 segment while that segment is in a reception queue, and delay at 997 the receiver of the acknowledging segment while that segment is in 998 a reception queue.

1000 Connectivity loss delay: Delay in transmission of the original segment or the acknowledging 1001 segment due to loss of connectivity -- that is, interruption in 1002 outbound link activity at the sender of either segment due to 1003 occultation, scheduled end of a tracking pass, etc.

1005 In this context, where errors on the order of seconds or even minutes 1006 may be tolerated, protocol processing time at each end of the session 1007 is assumed to be negligible.

1009 Inbound queuing delay is also assumed to be negligible because, even 1010 on small spacecraft, LTP processing speeds are high compared to data 1011 transmission rates.

1013 Two mechanisms are used to make outbound queuing delay negligible:

1015 The expected arrival time of an acknowledging segment is not 1016 calculated until the moment the underlying communication system 1017 notifies LTP that radiation of the original segment has begun. 1018 All outbound queuing delay for the original segment has already 1019 been incurred at that point.

1021 LTP's deferred transmission model [Sec 5.1] minimizes latency in 1022 the delivery of acknowledging segments (reports and 1023 acknowledgments) to the underlying communication system; that is, 1024 acknowledging segments are (in concept) appended to the internal 1025 operations queue rather than the application data queue, so they 1026 have higher transmission priority than any other outbound 1027 segments, i.e., they should always be de-queued for transmission 1028 first. This limits outbound queuing delay for a given 1029 acknowledging segment to the time needed to de-queue and radiate 1030 all previously generated acknowledging segments that have not yet 1031 been de-queued for transmission. Since acknowledging segments are 1032 sent infrequently and are normally very small, outbound queuing 1033 delay for a given acknowledging segment is likely to be minimal.

1035 Deferring calculation of the expected arrival time of the 1036 acknowledging segment until the moment at which the original segment 1037 is radiated has the additional effect of removing from consideration 1038 any original segment transmission delay due to loss of connectivity 1039 at the original segment sender.

1041 Radiation delay at each end of the session is simply segment size 1042 divided by transmission data rate. It is insignificant except when 1043 the data rate is extremely low (for example, 10 bps), in which case the 1044 use of LTP may well be inadvisable for other reasons (LTP header 1045 overhead, for example, could be excessive at such low data rates). 1046 Therefore radiation delay is normally assumed to be negligible.

1048 We assume that one-way light time to the nearest second can always be 1049 known (for example, provided by the operating environment).

1051 So the initial expected arrival time for each acknowledging segment 1052 is typically computed as simply the current time at the moment that 1053 radiation of the original segment begins, plus twice the one-way 1054 light time, plus 2*N seconds of margin to account for processing and 1055 queuing delays and for radiation time at both ends. N is a parameter 1056 set by network management for which 2 seconds seems to be a reasonable 1057 default value.

1059 This leaves only one unknown: the additional round-trip time 1060 introduced by loss of connectivity at the sender of the acknowledging 1061 segment. To account for this, we again rely on external link state 1062 cues. Whenever interruption of transmission at a remote LTP engine 1063 is signaled by a link state cue, we suspend the countdown timers for 1064 all acknowledging segments expected from that engine. Upon a signal 1065 that transmission has resumed at that engine, we resume those timers 1066 after (in effect) adding to each expected arrival time the length of 1067 the timer suspension interval.
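The arrival-time arithmetic and the suspend/resume behavior described above might be rendered as follows in Python. This is a sketch under the stated assumptions, not an implementation of the LTP specification; the function names and the dictionary-based timer representation are invented for illustration.

      def initial_expected_arrival(radiation_start_time,
                                   one_way_light_time, n_margin=2.0):
          # Computed only when the underlying communication system
          # reports that radiation of the original segment has begun,
          # so all outbound queuing delay has already been incurred.
          # n_margin is the network-management parameter N; the text
          # suggests 2 seconds as a reasonable default.
          return (radiation_start_time
                  + 2.0 * one_way_light_time
                  + 2.0 * n_margin)

      def suspend_timer(timer, now):
          # Link state cue: the remote engine has stopped transmitting,
          # so stop counting down toward the expected arrival time.
          timer["suspended_at"] = now

      def resume_timer(timer, now):
          # Link state cue: transmission has resumed; push the expected
          # arrival time back by the length of the suspension interval.
          timer["expected_arrival"] += now - timer["suspended_at"]
          timer["suspended_at"] = None

      # Example: for a segment whose radiation begins at t = 0 over a
      # link with a one-way light time of 600 seconds and N = 2, the
      # initial expected arrival time of its acknowledging segment is
      # 0 + 2*600 + 2*2 = 1204 seconds.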
1069 6. Tracing LTP back to CFDP

1071 LTP in effect implements most of the "core procedures" of the CCSDS 1072 File Delivery Protocol (CFDP) specification, minus flow labels and 1073 all features that are specific to operations on files and filestores 1074 (file systems). In the IPN architecture we expect that file and 1075 filestore operations will be conducted by file transfer application 1076 protocols -- notably CFDP itself -- operating on top of the DTN 1077 Bundling protocol. Bundling's QoS features serve the same purposes 1078 as CFDP's flow labels, so flow labeling is omitted from LTP. 1079 Bundling in effect implements the CFDP "extended procedures" in a 1080 more robust and scalable manner than is prescribed by the CFDP 1081 standard.

1083 The fundamental difference between LTP and CFDP is that, while CFDP 1084 delivers named files end-to-end, LTP is used to transmit arbitrary, 1085 unnamed blocks of data point-to-point.

1087 Some differences between LTP and CFDP are simply matters of 1088 terminology; the following table summarizes the correspondences in 1089 language between the two.

1091 --------------LTP------------- ------------CFDP-----------

1093 LTP engine CFDP entity

1095 Segment Protocol Data Unit (PDU)

1097 Reception Report NAK

1099 Client service request Service request

1101 Client service notice Service indication

1103 CFDP specifies four mechanisms for initiating data retransmission, 1104 called "lost segment detection modes". LTP effectively supports all 1105 four:

1107 "Deferred" mode is implemented in LTP by the Request flag in the 1108 EORP data segment, which triggers reception reporting upon receipt 1109 of the EORP.

1111 "Prompted" mode is implemented in LTP by turning on Request flags 1112 in data segments that precede the EORP; these additional 1113 checkpoints trigger interim reception reporting under the control 1114 of the source LTP engine.

1116 "Asynchronous" mode is implemented in LTP by the autonomous 1117 production, under locally specified conditions, of additional 1118 reception reports prior to arrival of the EORP.

1120 "Immediate" mode is simply a special case of asynchronous mode, 1121 where the condition that triggers autonomous reception reporting 1122 is detection of a gap in the incoming data.

1124 CFDP uses a cyclic timer to iterate reception reporting until 1125 reception is complete. Because choosing a suitable interval for such 1126 a timer is potentially quite difficult, LTP instead flags the last 1127 data segment of each retransmission as a checkpoint, sent reliably; 1128 the cascading reliable transmission of checkpoint and RS segments 1129 assures the continuous progress of the transmission session.

1131 As the following table indicates, most of the functions of CFDP PDUs 1132 are accomplished in some way by LTP segments.

1134 --------------LTP------------- -------------CFDP----------

1136 Data segments File data and metadata PDUs

1138 Flags on data segments EOF (Complete), Prompt (NAK), 1139 Prompt (Keep Alive)

1141 Report segment ACK (EOF Complete), NAK, 1142 Keep Alive, Finished (Complete)

1144 Report-acknowledgment ACK (Finished Complete)

1146 Cancel segment EOF (Cancel, Protocol Error) 1147 Finished (Cancel, Protocol Error)

1149 Cancellation Acknowledgment ACK (EOF (Cancel, Protocol Error), 1150 Finished (Cancel, Protocol Error))

1152 But some CFDP PDUs have no LTP equivalent because in an IPN 1153 architecture they will likely be implemented elsewhere. CFDP's EOF 1154 (Filestore error) and Finished (Filestore error) PDUs would be 1155 implemented in an IPN application-layer file transfer protocol, e.g., 1156 CFDP itself. CFDP's Finished [End System] PDU is a feature of the 1157 Extended Procedures, which would in effect be implemented by the 1158 Bundling protocol.

1160 7. Security Considerations

1162 Not relevant for this document.

1164 8. IANA Considerations

1166 Not relevant for this document. Please see the IANA Considerations 1167 sections of the other Internet-Drafts in this series (the main protocol 1168 specification and the protocol extensions).

1170 9.
Acknowledgments

1172 Many thanks to Tim Ray, Vint Cerf, Bob Durst, Kevin Fall, Adrian 1173 Hooke, Keith Scott, Leigh Torgerson, Eric Travis, and Howie Weiss for 1174 their thoughts on this protocol and its role in the Delay-Tolerant 1175 Networking architecture.

1177 Part of the research described in this document was carried out at 1178 the Jet Propulsion Laboratory, California Institute of Technology, 1179 under a contract with the National Aeronautics and Space 1180 Administration. This work was performed under DOD Contract DAA-B07- 1181 00-CC201, DARPA AO H912; JPL Task Plan No. 80-5045, DARPA AO H870; 1182 and NASA Contract NAS7-1407.

1184 Thanks are also due to Shawn Ostermann, Hans Kruse, and Dovel Myers 1185 at Ohio University for their suggestions and advice in making various 1186 design decisions.

1188 Part of this work was carried out at Trinity College Dublin as part 1189 of the SeNDT contract funded by Enterprise Ireland's research 1190 innovation fund.

1192 10. References

1194 10.1 Normative References

1196 [B97] S. Bradner, "Key words for use in RFCs to Indicate Requirement 1197 Levels", BCP 14, RFC 2119, March 1997.

1199 [LTP] Ramadas, M., Burleigh, S., and Farrell, S., "Licklider 1200 Transmission Protocol - Specification", draft-irtf-dtnrg-ltp-03.txt 1201 (Work in Progress), July 2005.

1203 [LTPEXT] Farrell, S., Ramadas, M., and Burleigh, S., "Licklider 1204 Transmission Protocol - Extensions", draft-irtf-dtnrg-ltp- 1205 extensions-01.txt (Work in Progress), July 2005.

1207 10.2 Informative References

1209 [BP] K. Scott and S. Burleigh, "Bundle Protocol Specification", Work 1210 in Progress, October 2003.

1212 [CCSDS] Consultative Committee for Space Data Systems web page, 1213 "http://www.ccsds.org".

1215 [CFDP] CCSDS File Delivery Protocol (CFDP). Recommendation for Space 1216 Data System Standards, CCSDS 727.0-B-2 BLUE BOOK Issue 1, October 1217 2002.

1219 [DSN] Deep Space Mission Systems Telecommunications Link Design 1220 Handbook (810-005) web page, 1221 "http://eis.jpl.nasa.gov/deepspace/dsndocs/810-005/".

1223 [DTN] K. Fall, "A Delay-Tolerant Network Architecture for Challenged 1224 Internets", In Proceedings of ACM SIGCOMM 2003, Karlsruhe, Germany, 1225 Aug 2003.

1227 [IPN] InterPlanetary Internet Special Interest Group web page, 1228 "http://www.ipnsig.org".

1230 [TFRC] M. Handley, S. Floyd, J. Padhye, and J. Widmer, "TCP Friendly 1231 Rate Control (TFRC): Protocol Specification", RFC 3448, January 2003.

1233 [TM] Packet Telemetry Specification. Recommendation for Space Data 1234 System Standards, CCSDS 103.0-B-2 BLUE BOOK Issue 2, June 2001.

1236 [TC] Telecommand Part 2 - Data Routing Service. Recommendation for 1237 Space Data System Standards, CCSDS 202.0-B-3 BLUE BOOK Issue 3, June 1238 2001.

1240 [ECS94] D. Eastlake, S. Crocker, and J. Schiller, "Randomness 1241 Recommendations for Security", RFC 1750, December 1994.

1243 [SCTP] R. Stewart, et al., "Stream Control Transmission Protocol", RFC 1244 2960, October 2000.

1246 11. Authors' Addresses

1248 Scott C.
Burleigh 1249 Jet Propulsion Laboratory 1250 4800 Oak Grove Drive 1251 M/S: 179-206 1252 Pasadena, CA 91109-8099 1253 Telephone +1 (818) 393-3353 1254 FAX +1 (818) 354-1075 1255 Email Scott.Burleigh@jpl.nasa.gov

1257 Manikantan Ramadas 1258 Internetworking Research Group 1259 301 Stocker Center 1260 Ohio University 1261 Athens, OH 45701 1262 Telephone +1 (740) 593-1562 1263 Email mramadas@irg.cs.ohiou.edu

1265 Stephen Farrell 1266 Distributed Systems Group 1267 Computer Science Department 1268 Trinity College Dublin 1269 Ireland 1270 Telephone +353-1-608-3070 1271 Email stephen.farrell@cs.tcd.ie

1273 12. Copyright Statement

1275 Copyright (C) The Internet Society (2005). This document is subject 1276 to the rights, licenses and restrictions contained in BCP 78, and 1277 except as set forth therein, the authors retain all their rights.

1279 This document and the information contained herein are provided on an 1280 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 1281 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET 1282 ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, 1283 INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE 1284 INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 1285 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.