Network Working Group                                          Yong Xie
Internet-Draft                                             Myung J. Lee
                                                       Tarek N. Saadawi
                                           The City College of New York
                                                         August 20, 1996

   Sink-Assisted Routing Protocol (SARP) For General IP Multicasting

Status of this Memo

   This document is an Internet-Draft.  Internet-Drafts are working
   documents of the Internet Engineering Task Force (IETF), its areas,
   and its working groups.  Note that other groups may also distribute
   working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   To learn the current status of any Internet-Draft, please check the
   1id-abstracts.txt listing contained in the Internet-Drafts Shadow
   Directories on ftp.is.co.za (Africa), nic.nordu.net (Europe),
   munnari.oz.au (Pacific Rim), ds.internic.net (US East Coast), or
   ftp.isi.edu (US West Coast).

Abstract

   This Internet-Draft proposes a routing protocol for supporting
   real-time multicast service over the Internet.  The work is
   motivated by the fact that existing multicast routing protocols are
   designed under the assumption of symmetric paths, an assumption
   that does not necessarily hold, particularly for real-time
   applications.  The essence of the proposed protocol is the use of
   inter-subnet Routing Guide Message (RGM) flows, generated by
   sink-subnets, to form multicast spanning trees.

   The underlying algorithms enable source and intermediate nodes to
   construct minimal end-to-end delay paths to sink-subnets based on
   the path and session information conveyed by incident RGM-flows.

1. Introduction

   The primary function of a multicast routing protocol is to select a
   subset of the Internet topology to form the group-specific spanning
   trees over which multicast datagrams are delivered from their
   sources to their sinks.
   In practice, a multipoint-to-multipoint tree is constructed from
   multiple point-to-multipoint subtrees.  Multicast routers need to
   look up the actual source or the sinks of a particular datagram in
   order to make forwarding decisions.

   A number of multicast routing protocols and associated algorithms
   have been proposed by the Internet community.  A typical example is
   the Distance Vector Multicast Routing Protocol (DVMRP) [4], which
   employs the Reverse Path Multicast (RPM) algorithm in the latest
   mrouted version 3.5 and in some vendor implementations.  As
   surveyed in [3], RPM is an enhancement of Reverse Path Broadcast
   (RPB) and Truncated Reverse Path Broadcasting (TRPB).

   RPM defines a way to form source-rooted spanning trees.  The
   distance back to the source of datagrams is the main factor in
   determining the forwarding paths.  It assumes path symmetry, which
   is a fundamental limitation of the entire "reverse optimal path"
   family of routing algorithms, including RPB, TRPB, and RPM.  In
   fact, a shortest path does not necessarily guarantee a shortest
   reverse path, because a full-duplex link can carry substantially
   imbalanced loads in each direction.  Furthermore, using fixed
   "metrics" (e.g., hop count) to compare alternative routes is
   inappropriate in situations where routes need to be chosen based on
   real-time parameters (e.g., real-time delay) (see [2]).  These two
   problems, the symmetric-path assumption and the fixed metric, pose
   a significant challenge for any IP multicast routing protocol that
   is to support real-time applications.  The proposed Sink-Assisted
   Routing Protocol (SARP) is an attempt to address these two issues.
   That is, SARP focuses on computing the minimal-delay path from
   sources to sinks using a real-time delay metric.  In contrast,
   DVMRP is derived from the Routing Information Protocol (RIP) and is
   concerned with computing the shortest path, in terms of fixed
   metrics, back to the source of a multicast datagram.  In short,
   SARP forms sink-rooted rather than source-rooted spanning trees
   using an inter-gateway primitive called the Routing Guide Message
   (RGM).

   In order to search for potential multicast source groups, it is
   assumed that each sink-subnet periodically generates and multicasts
   RGMs to every possible subnet within a limited scope of the
   Internet.  A source-subnet starts multicasting datagrams out only
   after intercepting the corresponding RGM flow(s).  The information
   conveyed by an RGM includes a list of the requested multicast
   groups on behalf of the entire sink-subnet, as well as the path
   status.

   RGMs are forwarded using a typical Reverse Path Broadcast
   algorithm, except that the optimal path is determined based on the
   measure of Reverse-Relative-Delay (RRD).  In other words, RGMs are
   routed along the paths whose reverse paths are optimal in terms of
   real-time delay back to the sink-subnets.
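   This draft does not fix an on-the-wire format for the RGM.  Purely
   as an illustration of the fields just described (the per-subnet
   group list, plus the RRD path-status measure detailed in Section
   3.1), an RGM payload might be laid out as in the following C
   sketch; every name, width, and bound in it is an assumption made
   for exposition, not part of the protocol.

      /* Illustrative sketch only: one possible in-memory layout for
       * the RGM payload described above.  Field names, widths, and
       * the group bound are assumptions; the draft defines no format. */
      #include <stdint.h>

      #define RGM_MAX_GROUPS 32            /* assumed bound */

      struct rgm_payload {
          uint32_t sink_subnet;            /* sink-subnet-id (also the
                                              IP Source Address) */
          int64_t  rrd;                    /* Reverse-Relative-Delay,
                                              accumulated hop by hop */
          uint16_t group_count;            /* requested group count */
          uint32_t groups[RGM_MAX_GROUPS]; /* class-D addresses requested
                                              by the sink-subnet */
      };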
   Currently, multicast service is considered connectionless, owing to
   the inherently connectionless nature of IP.  A conventional
   higher-layer, handshake-style setup of point-to-point connections
   is often too expensive for most real-time and multicast
   applications.  Also, flow control and error correction can become
   quite complicated when maintaining a dynamic multicast session
   group.  Nevertheless, there is an emerging demand for a
   quality-assured IP multicast service, which requires a certain
   degree of connectivity between sources and sinks.

   Towards that end, SARP is also intended to introduce the concept of
   "loose-IP-connectivity".  That is, the logical connectivity between
   two IP end-points can be loosely defined by the continuity of an
   end-to-end control-message flow, while still using basic IP
   support.  In SARP, the multicast spanning tree is completely
   specified by the incidence relations between the RGM-flows and the
   source nodes.  The sink-nodes send periodic RGMs as requests, wait
   for data arrival, and regard data arrival as the natural
   acknowledgement, while a source-node is aware of the existence of
   the sink-nodes and maintains transmission only while RGM-flows are
   engaged, giving a sense of a data connection.  We strongly believe
   that, as long as multicasting and routing are part of IP's
   functions, it is necessary to extend IP to provide an
   unacknowledged connection-oriented service.  First, such a service
   is beneficial to IP multicasting, in which point-to-point
   addressing is impossible and the data connection is more important
   than the logical connection.  Second, it makes it possible to
   conjoin routing and resource reservation, which are naturally two
   inseparable issues.

   Throughout this draft, the term "IP-entity" is used generally to
   cover a single network or a subnet in which all hosts share the
   same subnet IP address.  An IP-entity has at least one full-duplex
   link to its neighbor.  The access point of an IP-entity to a link
   is called a port.  The abstraction of the Internet can then be
   considered a planar graph with the set of IP-entities as nodes and
   pairs of directed edges connecting adjacent nodes.  Thus, we shall
   frequently refer to an IP-entity as a node, for example, the
   source-node, the sink-node, and the intermediate-node.

2. Model of Operation

   Multicast routing within an entity is transparent to IP.  That is,
   as long as a datagram remains on a subnet, it can reach any sink
   application within the subnet using technology specific to that
   subnetwork.  For example, on an Ethernet, multicasting is handled
   by mapping a class-D address into an Ethernet address.  On
   connection-oriented networks, such as ATM networks, it can be done
   by using a signaling mechanism to establish host-to-host or
   host-to-gateway Virtual Circuits (VCs).  Since the task of IP
   multicast routing is primarily a matter of finding gateways among
   networks, the proposed protocol is concerned only with routing
   multicast datagrams across an internetwork, along the minimal-delay
   paths from source-subnets to sink-subnets.

   SARP depends on the participation of the sinks to establish the
   group-specific multicast spanning trees.  According to the
   standardized host implementation for supporting IP multicasting
   [1], incoming multicast IP datagrams are received by upper-layer
   protocol modules using the same "Receive IP" operation as for
   normal unicast datagrams, and an application process needs to
   explicitly ask its IP module to join a group through a service
   interface.  In addition, SARP assumes that a process declares
   itself a "sink" by opening the corresponding file descriptor in
   read or read/write mode.  The host IP module must be extended to
   report the multicast session information to the subnet-router, so
   that the routing protocol module is able to summarize the demands
   of the subnet's hosts into a per-subnet sink-group-table.  As long
   as the table is not empty, RGMs are generated.
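   As a concrete illustration only, the sink-group-table could be
   maintained as a reference-counted list of groups, with RGM
   generation reduced to a non-empty check.  The names and bounds in
   the following C sketch are assumptions for exposition, not protocol
   requirements.

      /* Illustrative sketch: a per-subnet sink-group-table kept by
       * the subnet-router.  A group stays in the table while at least
       * one local process is a sink for it. */
      #include <stdint.h>

      #define MAX_GROUPS 64                   /* assumed bound */

      struct sink_group_entry { uint32_t group; unsigned refcount; };

      static struct sink_group_entry table[MAX_GROUPS];
      static unsigned ngroups;

      /* The host IP module reports a local process joining `group`. */
      void sink_join(uint32_t group)
      {
          for (unsigned i = 0; i < ngroups; i++)
              if (table[i].group == group) { table[i].refcount++; return; }
          if (ngroups < MAX_GROUPS) {
              table[ngroups].group = group;
              table[ngroups].refcount = 1;
              ngroups++;
          }
      }

      /* The host IP module reports a local process leaving `group`. */
      void sink_leave(uint32_t group)
      {
          for (unsigned i = 0; i < ngroups; i++)
              if (table[i].group == group && --table[i].refcount == 0) {
                  table[i] = table[--ngroups];  /* compact the table */
                  return;
              }
      }

      /* RGMs are generated as long as the table is not empty. */
      int rgm_needed(void) { return ngroups > 0; }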
3. Algorithms for SARP

3.1. Measuring the Reverse-Relative-Delay of a Path

   Considering the real-time properties of most multicast
   applications, it is desirable that the distance between two nodes
   be quantified in terms of real-time delay.  The absolute delay,
   however, is costly to measure, since network clocks are usually not
   synchronized accurately enough for real-time interactive
   applications.  Moreover, the current network synchronization
   protocols (e.g., [5]) assume symmetric network paths and use the
   round-trip delay as an approximation.  We call the delay measured
   by referencing asynchronous clocks the "relative-delay".  It is the
   real-time delay measure with a clock-offset error embedded in it.

   In a best-effort packet-switching internetwork, "optimal" is quite
   a relative term.  When a source/intermediate-node intercepts only
   one RGM-flow from a given sink-node, the route of the incoming
   RGM-flow is certainly the only, and therefore the optimal, route
   over which to forward datagrams.  When multiple RGM-flows incident
   on a source-node are rooted at the same sink-node but arrive via
   different routes, the optimal route back to the sink-node can be
   determined by simply comparing the relative-delays of the distinct
   paths.  This is possible because the relative-delays incurred along
   any path between a given pair of nodes contain the same
   clock-offset component.

   To measure the relative-delay of a path, each node in the Internet
   that participates in the routing protocol is assumed to send
   periodic delay-probe-packets to every adjacent node.  Thus, the
   receiving node is able to compute the directional One-Hop-Relative-
   Delays (OHRDs) from (not to) its neighbors.  Each node is required
   to maintain an OHRD cache with an entry for each neighbor.  No
   further routine information exchange or propagation among routing
   entities is required by the protocol.  This protects the algorithm
   against slow convergence.  The RGM is designed to convey the
   "run-time" relative-delay information across the internetwork.

   Let d(i,i+1) represent the absolute-delay from node i to adjacent
   node i+1.  The OHRD d'(i,i+1) can be measured by sending a probe
   IP datagram from i to i+1, whose departure time t(i) and arrival
   time t(i+1) are time-stamped according to the clocks at i and i+1,
   respectively.  d'(i,i+1) actually consists of two components:

      d'(i,i+1) = d(i,i+1) + o(i,i+1)

   where o(i,i+1) denotes the clock-offset between i and i+1.

   Obviously, a single trial of the delay measurement would be
   insufficient to represent the statistics of the path.  It is
   assumed that the OHRD database maintained by each node is updated
   with moving averages of instantaneous measurements.  The
   measurement interval and the averaging window size are critical to
   both the sensitivity and the stability of the routing protocol.
   The basic criterion is that the OHRD database should reflect link
   statistics while requiring minimal computation.  Further study is
   required to determine those parameters.
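   The draft leaves the averaging scheme open.  As one hedged
   illustration, an exponentially weighted moving average over
   time-stamped probe datagrams could maintain the OHRD cache as in
   the sketch below; the probe layout, the 1/8 smoothing factor, and
   all names are assumptions.

      /* Illustrative sketch: updating the OHRD cache with an
       * exponentially weighted moving average. */
      #include <stdint.h>

      #define MAX_NEIGHBORS 16

      struct probe {
          int64_t departure_usec;  /* t(i), stamped by sender's clock */
      };

      static int64_t ohrd[MAX_NEIGHBORS];  /* mean d'(i,i+1) per neighbor */
      static int     have_ohrd[MAX_NEIGHBORS];

      /* Called by node i+1 on receipt of a probe from neighbor n;
       * arrival_usec is t(i+1) by the local clock.  The sample
       * t(i+1) - t(i) = d(i,i+1) + o(i,i+1) embeds the clock offset,
       * exactly as described above. */
      void ohrd_update(int n, const struct probe *p, int64_t arrival_usec)
      {
          int64_t sample = arrival_usec - p->departure_usec;

          if (!have_ohrd[n]) {
              ohrd[n] = sample;            /* first sample seeds the mean */
              have_ohrd[n] = 1;
          } else {
              /* ohrd += alpha * (sample - ohrd), with alpha = 1/8 */
              ohrd[n] += (sample - ohrd[n]) / 8;
          }
      }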
   From now on, we shall refer to d'(i,i+1) as the measure of the mean
   OHRD from node i to its neighbor i+1.  It is recorded by node i+1
   and is available for reference at any time instant.

   When an RGM traverses a multi-hop path from node j to node i, the
   relative-delay of the reverse path, denoted D'(i,j), can be
   calculated by adding the OHRD of every hop along the path.  Among
   other information, each RGM-datagram carries a Reverse-Relative-
   Delay (RRD) parameter, to which the OHRD of each hop is added just
   before the RGM crosses that hop.  Thus, the receiving node is able
   to examine the reverse path by simply reading the value of the RRD
   field:

      D'(i,j) = d'(j-1,j) + d'(j-2,j-1) + ... + d'(i,i+1)
              = d(j-1,j) + d(j-2,j-1) + ... + d(i,i+1) + o(i,j)

   where o(i,j) is the clock-offset between node i and node j.  It is
   equal to the sum of the clock-offset components of all OHRDs along
   the path.  Letting c(i) represent the value of node-i's clock at a
   given time instant, o(i,j) is always independent of the
   intermediate clocks, and of the path as well:

      o(i,j) = c(i) - c(j)
             = c(i) - c(i+1) + c(i+1) - c(i+2) + ... + c(j-1) - c(j)
             = o(i,i+1) + o(i+1,i+2) + ... + o(j-1,j)

   In the example topology below, two RGM-flows originate at node-A
   and arrive at node-D, one via B and one via C.  If D'(D,B,A) <
   D'(D,C,A), the port to B is considered by node-D to be the optimal
   route for forwarding datagrams of a given multicast group to
   sink-node A, and vice versa.

        B---------D     D'(D,B,A) = d'(B,A) + d'(D,B)
       /         /                = d(B,A) + o(B,A) + d(D,B) + o(D,B)
      /         /                 = D(D,B,A) + o(D,A)
     A---------C        D'(D,C,A) = D(D,C,A) + o(D,A)

3.2. Forming the Sink-Rooted Spanning Trees

   As mentioned above, RGMs are used to search for multicast
   source-node groups on behalf of multicast sink-node groups, and to
   convey information about the end-to-end RRD measures.  The IP
   header of an RGM datagram contains the sink-subnet-id as its Source
   Address and, in the Destination Address field, a well-known
   multicast address specifically assigned to the routing protocol.
   The initial Time-To-Live (TTL) parameter is set to a number greater
   than 1 and less than 255 to limit the sink-rooted tree to a given
   scope.  RGMs also carry other information, which is discussed
   later.

   RGMs are forwarded using the typical Reverse Path Broadcasting
   (RPB) algorithm.  It is assumed that each node with more than one
   port maintains a session database, called the "RGM-flow table", to
   cache information about every incident RGM-flow.  An entry is
   created for a sink-node in the cache upon receipt of the first RGM
   from that node.  Subsequent arrivals from the same sink-node keep
   the entry alive.

   A node on the edge of the Internet, i.e., one with only a single
   port connecting it to the outside world, only has to record the
   union of the multicast-group-lists of all incoming RGMs.  The
   actual source addresses of the RGMs are of no interest, because the
   node must send datagrams through its only port anyway if it is a
   source of a requested multicast group.

   When an intermediate/source node has direct connections to multiple
   adjacent nodes, it may intercept multiple RGM-flows sourced by the
   same sink-node via different routes.  Nevertheless, only one
   RGM-flow from a given sink-node is accepted and possibly forwarded
   to form a segment of the sink-rooted spanning tree.
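   To make the per-hop RRD bookkeeping concrete, the following sketch
   shows RGM processing at a multi-port node: record the RRD of the
   incident flow, accept only the smallest-RRD flow for the sink, and
   add the OHRD of the reverse hop (from the downstream neighbor back
   to this node, which is exactly what the local probe cache records)
   before each forwarded copy crosses that hop.  Every name here,
   including send_rgm, is an assumed helper, and a real table would be
   keyed by SinkNode.

      /* Illustrative sketch: RGM acceptance and Reverse Path
       * Broadcast forwarding with RRD accumulation. */
      #include <stdint.h>

      #define NPORTS  4
      #define RRD_INF INT64_MAX

      struct rgm { uint32_t sink_subnet; int64_t rrd; };

      extern int64_t ohrd[NPORTS];  /* d'(neighbor -> this node), per port */
      extern void    send_rgm(const struct rgm *m, int port);  /* assumed */

      static int64_t flow_rrd[NPORTS] =           /* one sink-node shown */
          { RRD_INF, RRD_INF, RRD_INF, RRD_INF };

      void rgm_receive(const struct rgm *m, int inport)
      {
          flow_rrd[inport] = m->rrd;  /* record RRD of this incident flow */

          /* Accept only the flow arriving over the smallest-RRD path. */
          for (int p = 0; p < NPORTS; p++)
              if (flow_rrd[p] < m->rrd)
                  return;             /* a better reverse path exists */

          /* Forward through every port except the incoming one, adding
           * the OHRD of the reverse hop before the RGM crosses it. */
          for (int p = 0; p < NPORTS; p++) {
              if (p == inport)
                  continue;
              struct rgm copy = *m;
              copy.rrd = m->rrd + ohrd[p];
              send_rgm(&copy, p);
          }
      }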
   In the RGM-flow table, an incoming RGM-flow is represented by the
   (SinkNode, InPort) pair.  The RRD is recorded for the respective
   RGM-flow whenever an RGM arrives.  Any other port that is currently
   not being contacted by the same sink-node is considered to have an
   RRD value of infinity.

   Suppose that node-A has three neighboring nodes and receives RGMs
   rooted at sink-subnet S (134.74.60.0) on ports p0 and p2.
   According to the RRD measurements, the route via p0 is the optimal
   reverse path back to node-S.  The RGMs arriving at p0 are forwarded
   through ports p1 and p2 to the downstream neighbors (with respect
   to RGM-flows), whereas those arriving at p2 are discarded.  The
   RGM-flow table of node-A should look as follows:

      SinkNode       InPort    RRD      FlowTTL
      -----------------------------------------
      134.74.60.0    p0        12345    2
                     p1        inf      /
                     p2        23456    3

   As shown in the table, each (SinkNode, InPort) pair also has a Flow
   Time-To-Live (FlowTTL) counter to monitor the continuity of the
   RGM-flow.  Each FlowTTL is decremented by one every time unit and
   is reset upon receipt of a subsequent RGM.  The RRD value of a
   sub-entry is set to infinity if no further RGMs arrive at the port
   and the FlowTTL has expired.  The entry for a given SinkNode is
   cleaned up only when all of its sub-entries have infinite RRD
   values.  It is mandatory that the interval between two consecutive
   RGMs sent by a sink-node along the same outgoing path have a
   protocol-specific upper bound, so that the continuity of the
   RGM-flow can be quantified.  Since packet loss and delay jitter are
   to be expected in the Internet, the duration of the FlowTTL is
   supposed to cover several such maximal RGM-intervals.

   In summary, a source/intermediate node records the RRD information
   for each incoming RGM-flow, and only forwards RGMs that arrive via
   the optimal reverse path (the one with the smallest RRD for a given
   sink-node) to downstream neighbors through all ports except the
   incoming port.  An RGM-flow is terminated by the receiving node if
   the TTLs of its RGM-datagrams are exhausted, and/or the reverse
   path of the RGMs is not considered the optimal reverse path back to
   their origin.
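   A minimal sketch of the FlowTTL aging just described follows,
   assuming a periodic timer that fires once per time unit; the
   structure layout and bounds are illustrative assumptions.

      /* Illustrative sketch: aging the RGM-flow table with FlowTTL.
       * Assumed to be called once per time unit by a periodic timer. */
      #include <stdint.h>

      #define NPORTS  4
      #define RRD_INF INT64_MAX

      struct flow_subentry {
          int64_t rrd;             /* RRD_INF while the port is silent */
          int     flow_ttl;        /* reset on every RGM arrival */
      };

      struct flow_entry {
          uint32_t sink_node;      /* sink-subnet-id */
          struct flow_subentry port[NPORTS];
          int      in_use;
      };

      void age_flow_entry(struct flow_entry *e)
      {
          int live = 0;

          for (int p = 0; p < NPORTS; p++) {
              if (e->port[p].rrd == RRD_INF)
                  continue;
              if (--e->port[p].flow_ttl <= 0)
                  e->port[p].rrd = RRD_INF;  /* RGM-flow discontinued */
              else
                  live++;
          }
          if (!live)
              e->in_use = 0;       /* all sub-entries infinite: clean up */
      }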
3.3. Forwarding Multicast Datagrams to Sink-Subnets

   Once the sink-rooted trees are formed, delivery of datagrams seems
   straightforward.  The Forwarding-Table of a source/intermediate
   node is derived from its RGM-flow table.  It contains an entry for
   each multicast group in which at least one active sink-node is
   interested.  Each multicast group in the table is mapped to the set
   of ports that represents the union of the optimal routes to every
   sink-node within the group.  Thus, the basic forwarding rule is
   that any node with locally generated or received datagrams forwards
   them to selected downstream neighbors according to the
   MulticastGroup-to-OutPorts map.  However, multiple sink-rooted
   trees might conjoin one another, so that the aggregated spanning
   graph contains circuits.  In order to form the group-specific
   spanning tree, circuits must be broken.

   #----------------------------------------------------------------#
   |                                                                 |
   |  +++++p0    +++++              G-E        G   E                 |
   |  + G @--------@ E @-------      |          \  |                 |
   |  ++@++   p1+++++p0     \      A-B-D      A-B-D                  |
   |   p1|  \                \       |          |                    |
   |     |p1 \ p0             \    F-C (A)    F-C (B)(C)(F)          |
   |  ++@++ p1+++++ p1++@++                                          |
   |  + A @--------@ B @--------@ D +       G-E    G-E    G-E        |
   |  +++++p0   ++@++p0      +++++           \ |    \ |    \ |       |
   |   p2|                         A-B-D    A B-D  A-B D             |
   |     |p0                         |        |      |               |
   |  +++++p0  p1++@++             F-C (D)  F-C (E)  F-C (G)         |
   |  + F @--------@ C +                                             |
   |  +++++      +++++           the tree rooted at a given          |
   |                             sink-node                           |
   |                                                                 |
   |  (*) represents sink node.                                      |
   #----------------------------------------------------------------#

   The figure above illustrates an example topology and a possible set
   of trees rooted at every potential sink-node in the graph.  Suppose
   that (A,C,G) is the set of active sink-nodes of a given multicast
   group, 239.10.10.1.  Accordingly, the forwarding-table of each node
   should contain the following information:

              MulticastGroup    OutPorts    SinkNodes
              ---------------------------------------
      node-A: 239.10.10.1       p0          C
                                p1          G

      node-B: 239.10.10.1       p1          A,G
                                p2          C

      node-C: 239.10.10.1       p0          A,G

      node-D: 239.10.10.1       p1          A,C

      node-E: 239.10.10.1       p0          C
                                p1          A,G

      node-F: 239.10.10.1       p0          A,C,G
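   The MulticastGroup-to-OutPorts map can be pictured as in the sketch
   below, where a port bitmask stands in for the OutPorts set and all
   names are illustrative assumptions.  With the table above, this
   lookup at node-B yields {p1,p2} for datagrams of 239.10.10.1
   arriving at p0, matching the behavior discussed next.

      /* Illustrative sketch: the forwarding-table lookup behind the
       * basic forwarding rule. */
      #include <stdint.h>

      struct fwd_entry {
          uint32_t group;          /* class-D multicast address */
          uint8_t  outports;       /* bit p set => forward via port p */
      };

      #define NENTRIES 64          /* assumed bound */

      static struct fwd_entry fwd[NENTRIES];
      static unsigned nfwd;

      /* Basic forwarding rule: a datagram of group g arriving on
       * inport goes out through every mapped port except the incoming
       * one. */
      uint8_t outports_for(uint32_t g, int inport)
      {
          for (unsigned i = 0; i < nfwd; i++)
              if (fwd[i].group == g)
                  return fwd[i].outports & (uint8_t)~(1u << inport);
          return 0;                /* no active sinks: discard */
      }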
   The problem arises if node-E serves as one of the sources of the
   multicast group, since the branches of tree(A) and tree(C)
   intersect at source-node E from different directions.  According to
   the basic forwarding rule, datagrams are multicast out through both
   port p0 and port p1 of node-E.  Upon receiving datagrams on its
   port p0, intermediate-node B is supposed to forward them further
   via ports p1 and p2, since both routes are optimal from B's point
   of view to sink-nodes A/G and C, respectively.  Similarly, node-A
   considers its OutPort p0 the optimal route to sink-node C.
   Obviously, this is a case of "mutual confusion", in which node-A
   and node-B each forward duplicate datagrams to, and receive them
   from, the other.

   The root cause of this error is that the sink-rooted routing
   algorithm, as described so far, operates in a manner that is
   completely transparent to the source address of a multicast
   datagram.  The protocol defines mechanisms that enable an
   intermediate-node to explicitly determine the downstream optimal
   route to a given sink-node.  However, the node has no idea whether
   an upstream path is also optimal for the same sink-node.
   Evidently, the "mutual confusion" problem exists between a pair of
   sink/intermediate nodes on a circuit only because they are
   receiving datagrams from a particular source/intermediate node that
   forks multiple data-flows by duplicating datagrams.  In the example
   above, only node-A and node-B are confused, and only when receiving
   datagrams sourced by node-E.  No node other than node-E causes the
   same problem.

   The simplest solution to the circuit problem is for any node that
   receives identical datagrams from different neighbors to always
   forward the earlier copy of a datagram and discard the later one.
   In so doing, any intermediate-node that has multiple OutPorts for a
   given multicast group has to check the source address and the
   sequence number of every incoming datagram.  Furthermore, the node
   may send messages suggesting that its neighbors not forward
   datagrams from a particular source-node.  The link that carries
   such messages in both directions must be the one causing the
   trouble.  The problem is that, in addition to the processing
   overhead and the potential "flip-flop" situation, link waste is
   inevitable.

   Since any source/intermediate node knows exactly who is expecting
   the data and binds them to the corresponding OutPorts, a
   deterministic solution is suggested: if a source/intermediate node
   "splits" the data-flow, i.e., forwards a single incoming data-flow
   to more than one OutPort, it is responsible for specifying the
   sink-information pertaining to the departure port in each outgoing
   datagram.  The information can be a list of sink-nodes, specified
   either exclusively or inclusively with respect to the OutPort.
   Also, a receiving intermediate-node checks, and possibly
   re-specifies, the applicable sink information only if it is the
   next splitting point of the data-flow.  The proposed forwarding
   algorithm can be expressed as the following pseudo-code, in which
   inclusive sink-specification is assumed:

      while ( receiving datagram D(G) of group G from InPort(D) ) {
          if ( G has entry in forwarding-table && TTL(D) != 0 ) {
              outport[] = OutPorts(D) - InPort(D);
              dup = number of entries in outport[];
              if ( dup == 0 )
                  discard D;
              else if ( dup == 1 ) {
                  TTL(D) = TTL(D) - 1;
                  forward D through outport[0];  /* only active port */
              }
              else {                             /* splitting point */
                  for ( i = 0; i < dup; i++ ) {
                      D' = copy of D;
                      TTL(D') = TTL(D) - 1;
                      /* inclusive sink-specification for this port */
                      SinkNodes(D') = sink-nodes mapped to outport[i];
                      forward D' through outport[i];
                  }
                  discard D;
              }
          }
          else
              discard D;
      }