INTERNET DRAFT                                                M. Pullen
Expiration: 30 March 1999                      George Mason University
                                                             R. Malghan
                                                   Hitachi Data Systems
                                                                L. Lavu
                                                           Bay Networks
                                                                G. Duan
                                                                 Oracle
                                                                  J. Ma
                                                              NewBridge
                                                                 H. Nah
                                               George Mason University
                                                      30 September 1998

            A Simulation Model for IP Multicast with RSVP

Status of this Memo

   This document is an Internet-Draft.  Internet-Drafts are working
   documents of the Internet Engineering Task Force (IETF), its areas,
   and its working groups.  Note that other groups may also distribute
   working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress".
   To learn the current status of any Internet-Draft, please check the
   "1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
   Directories on ftp.is.co.za (Africa), nic.nordu.net (Northern
   Europe), ftp.nis.garr.it (Southern Europe), munnari.oz.au (Pacific
   Rim), ftp.ietf.org (US East Coast), or ftp.isi.edu (US West Coast).

Abstract

   This document describes a detailed model of IPv4 multicast with RSVP
   that has been developed using the OPNET simulation package [4], with
   protocol procedures defined in the C language.  The model was
   developed to allow investigation of performance constraints on
   routing, but should have wide applicability in the Internet
   multicast/resource reservation community.  We are making this model
   publicly available with the intention that it can be used to provide
   expanded studies of resource-reserved multicasting.

Table of Contents

   1. Background
   2. The OPNET Simulation Environment
   3. IP Multicast Model
      3.1 Address Format
      3.2 Network Layer
      3.3 Node Layer
   4. RSVP Model
      4.1 RSVP Application
      4.2 RSVP on Routers
      4.3 RSVP on Hosts
   5. Multicast Routing Model Interface
      5.1 Creation of multicast routing processor node
      5.2 Interfacing processor nodes
      5.3 Interrupt Generation
      5.4 Modifications of modules in the process model
   6. OSPF and MOSPF Models
      6.1 Init
      6.2 Idle
      6.3 BCOspfLsa
      6.4 BCMospfLsa
      6.5 Arr
      6.6 Hello_pks
      6.7 Mospfspfcalc
      6.8 Ospfspfcalc
      6.9 UpstrNode
      6.10 DABRA
   7. DVMRP Model
      7.1 Init
      7.2 Idle
      7.3 Probe_Send State
      7.4 Report_Send
      7.5 Prune_Send
      7.6 Graft_send
      7.7 Arr_Pkt
      7.8 Route_Calc
      7.9 Timer
   8. Simulation performance
   9. Future Work
   10. References

1. Background

   The successful deployment of IP multicasting [1] and its
   availability in the Mbone has led to a continuing increase in real-
   time multimedia Internet applications.  Because the Internet has
   traditionally supported only a best-effort quality of service, there
   is considerable interest in creating mechanisms that will allow
   adequate resources to be reserved in networks using the Internet
   protocol suite, such that the quality of real-time traffic such as
   video, voice, and distributed simulation can be sustained at
   specified levels.  The RSVP protocol [2] has been developed for this
   purpose and is the subject of ongoing implementation efforts.
   Although the developers of RSVP have used simulation in their design
   process, no simulation of IPmc with RSVP has been generally
   available for analysis of the performance and prediction of the
   behavior of these protocols.  The simulation model described here
   was developed to fill this gap, and is explicitly intended to be
   made available to the IETF community.

2. The OPNET Simulation Environment

   The Optimized Network Engineering Tools (OPNET) is a commercial
   simulation product of the MIL3 company of Arlington, VA.  It
   employs a discrete event simulation approach that allows large
   numbers of closely spaced events in a sizable network to be
   represented accurately and efficiently.  OPNET uses a modeling
   approach in which networks are built of components interconnected
   by perfect links that can be degraded at will.  Each component's
   behavior is modeled as a state-transition diagram.  The process
   that takes place in each state is described by a program in the C
   language.  We believe this makes the OPNET-based models relatively
   easy to port to other modeling environments.  This family of models
   is compatible with OPNET 3.5.
   The following sections describe the state-transition models and
   process code for the IPmc and RSVP models we have created using
   OPNET.  Please note that an OPNET "layer" is not necessarily
   equivalent to a layer in a network stack, but shares with a stack
   layer the property that it is a highly modular software element
   with well-defined interfaces.

3. IP Multicast Model

   The following processing takes place in the indicated modules.
   Each subsection below describes in detail a layer in the host and
   the router that can be simulated with the help of the corresponding
   OPNET network layer, node layer, or process layer, starting from
   the physical layer.

3.1 Address Format

   The OPNET IP model has only one type of addressing, denoted "X.Y",
   where X is 24 bits long and Y is 8 bits long, corresponding to an
   IPv4 Class C network.  X indicates the destination or source
   network number and Y indicates the destination or source node
   number.  In our model X = 500 is reserved for multicast traffic.
   For multicast traffic the value of Y indicates the group to which
   the packet belongs.

3.2 Network Layer

   Figure 1 describes an example network topology built using the
   OPNET network editor.  This network consists of two backbone
   routers BBR1 and BBR2, three area border routers ABR1, ABR2, and
   ABR3, and six subnets F1 through F6.  As OPNET has no full-duplex
   link model, each connecting link is modeled as two simplex links
   enabling bidirectional traffic.

   Figure 1: Network Layer of Debug Model

3.2.1 Attributes

   The attributes of the elements of the network layer are:

   a. Area Border Routers and Backbone Routers

      1. IP address of each active interface of each router
         (network_id.node_id)
      2. Service rate of the IP layer (packets/sec)
      3. Transmission speed of each active interface (bits/sec)

   b. Subnets

      1. IP address of each active interface of the router in the
         subnet
      2. IP addresses of the hosts in each subnet
      3. Service rate of the IP layer in the subnet router and the
         hosts

   c. Simplex links

      1. Propagation delay of the links
      2. The process model to be used for simulating the simplex
         links (this determines whether animation is included or not)

3.2.2 LAN Subnets

   Figure 2 shows the FDDI ring as used in a subnet.  The subnet will
   have one router and one or more hosts.  The router in the subnet is
   included to route the traffic between the FDDI ring or Ethernet in
   the corresponding subnet and the external network.  The subnet
   router is connected on one end to the Ethernet or FDDI ring and
   normally is also connected to an area border router on another
   interface (the area border routers may be connected to more than
   one backbone router).  In the Ethernet all the hosts are connected
   to the bus, while in FDDI the hosts are interconnected in a ring as
   illustrated in Figure 2.

   Figure 2: FDDI Ring Subnet Layer

   FDDI provides general-purpose networking at 100 Mb/sec transmission
   rates for large numbers of communicating stations configured in a
   ring topology.  Use of ring bandwidth is controlled through a timed
   token rotation protocol, wherein stations must receive a token and
   meet a set of timing and priority criteria before transmitting
   frames.  In order to accommodate network applications in which
   response times are critical, FDDI provides for deterministic
   availability of ring bandwidth by defining a synchronous
   transmission service.  Asynchronous frame transmission requests
   dynamically share the remaining ring bandwidth.

   Ethernet is a bus-based local area network (LAN) technology.
   The operation of the LAN is managed by a media access control (MAC)
   protocol following the IEEE 802.3 standard, providing Carrier Sense
   Multiple Access with Collision Detection (CSMA/CD) for the LAN
   channel.

3.3 Node Layer

   This section discusses the internal structure of hosts and routers
   with the help of node-level illustrations built using the Node
   editor of OPNET.

3.3.1 Basic OPNET Elements

   The basic elements of a node-level illustration are:

   a. Processor nodes: Processor nodes are used for processing
      incoming packets and generating packets with a specified packet
      format.

   b. Queue nodes: Queue nodes are a superset of processor nodes.  In
      addition to the capabilities of processor nodes, queue nodes
      also have the capability to store packets in one or more queues.

   c. Transmitter and receiver nodes: Transmitters simulate the link
      behavior effect of packet transmission and receivers simulate
      the receiving effects of packet reception.  The transmission
      rate is an attribute of the transmitter and the receiving rate
      is an attribute of the receiver.  Together these values
      determine the transmission delay of a packet.

   d. Packet streams: Packet streams are used to interconnect the
      nodes described above.

   e. Statistic streams: Statistic streams are used to convey
      information between the different nodes: processor, queue,
      transmitter, and receiver nodes.

3.3.2 Host Description

   The host model built using OPNET has a layered structure.  Unlike
   the OPNET layers (network, node, and process layers) that describe
   the network at different levels, protocol stack elements are
   implemented as OPNET nodes.  Figure 3 shows the node-level
   structure of an FDDI host.

   Figure 3: Node Level of Host

   a. MAC queue node: The MAC interfaces on one side to the physical
      layer through the transmitter (phy_tx) and receiver (phy_rx) and
      also provides services to the IP layer.  Use of ring bandwidth
      is controlled through a timed token rotation protocol, wherein
      hosts must receive a token and meet a set of timing and priority
      criteria before transmitting frames.  When a frame arrives at
      the MAC node, the node performs one of the following actions:

      1. If the owner of the frame is this MAC, the MAC layer destroys
         the frame, since the frame has finished circulating through
         the FDDI ring.
      2. If the frame is destined for this host, the MAC layer makes a
         copy of the frame, decapsulates it, and sends the
         decapsulated frame (packet) to the IP layer.  The original
         frame is transmitted to the next host in the FDDI ring.
      3. If the owner of the frame is any other host and the frame is
         not destined for this host, the frame is forwarded to the
         adjacent host.

   b. ADDR_TRANS processor node: The next layer above the MAC layer is
      the addr_trans processor node.  This layer provides service to
      the IP layer by translating the IP address to a physical
      interface address.  It accepts packets from the IP layer with
      next-node information, maps the next-node information to a
      physical address, and forwards the packet for transmission.
      This service is required only in one direction, from the IP
      layer to the MAC layer.  Since queuing is not done at this
      level, a processor node is used to accomplish the address
      translation function from IP to MAC address (ARP).

   c. IP queue node: Network routing/forwarding in the hierarchy is
      implemented here.  The IP layer provides service for the layers
      above, which are the different higher-level protocols, by
      utilizing the services provided by the MAC layer.
      For packets arriving from the MAC layer, the IP layer
      decapsulates the packet and forwards the information to an
      upper-layer protocol based upon the value of the protocol ID in
      the IP header.  For packets arriving from upper-layer protocols,
      the IP layer obtains the destination address, calculates the
      next-node address from the routing table, encapsulates the
      packet with an IP header, and forwards it to the addr_trans node
      with the next-node information.

      The IP node is a queue node.  It is in this layer that packets
      incur delay, which simulates the processing capability of a host
      and queueing for use of the outgoing link.  A packet arriving at
      the IP layer will be queued and experience delay when it finds
      another packet already being transmitted, plus possibly other
      packets queued for transmission.  The packets arriving at the IP
      layer are queued and serviced with a first-in first-out (FIFO)
      discipline.  The queue size and service rate of the IP layer are
      both promoted attributes, specified at the simulation run level
      by the environment file.

   d. IGMP processor node: The models described above are standard
      components available in OPNET libraries.  We have added to these
      the host multicast protocol model IGMP_host, the router
      multicast model IGMP_gwy, and the unicast best-effort protocol
      model UBE.

      The IGMP_host node (Figure 4) is a process node.  Packets are
      not queued in this layer.  IGMP_host provides unique group
      management services for the multicast applications, utilizing
      the services provided by the IP layer.  IGMP_host maintains a
      single table which consists of the group membership information
      of the applications above the IGMP layer.  The function
      performed by the IGMP_host layer depends upon the type of the
      packet received and the source of the packet.
   Figure 4: IGMP process on hosts

   The IGMP_host layer expects certain types of packets from the
   application layer and from the network:

   1. Accept join group requests from the application layer (which can
      be one or more applications): IGMP_host maintains a table which
      consists of the membership information for each group.  When an
      application sends a join request, it requests to join a specific
      group N, and the membership information is updated.  This new
      group membership information has to be conveyed to the nearest
      router and to the MAC layer.  If the IGMP_host is already a
      member of this group (i.e., if another application above the
      IGMP_host is a member of the group N), the IGMP_host does not
      have to send a message to the router or indicate to the MAC
      layer.  If the IGMP_host is not currently a member, the
      IGMP_host generates a join request for the group N (this is
      called a "report" in RFC 1112) and forwards it to the IP layer
      to be sent to the nearest router.  In addition, the IGMP_host
      also conveys this membership information to the MAC layer
      interfacing to the physical layer, through the OPNET "statistic
      wire" connected from the IGMP_host to the MAC layer, so that the
      MAC layer knows the membership information immediately and
      begins to accept the frames destined for the group N.  (An OPNET
      statistic wire is a virtual path to send information between
      OPNET models.)

   2. Accept queries arriving from the nearest router and send
      responses based on the membership information in the multicast
      table at the IGMP_host layer: A query is a message from a router
      inquiring of each host on the router's interface about group
      membership information.  When the IGMP_host receives a query, it
      looks up the multicast group membership table to determine
      whether any of the host's applications are registered for any
      group.  If any registration exists, the IGMP_host schedules an
      event to generate a response after a random amount of time
      corresponding to each active group.  The Ethernet example in
      Figure 5 and the description in the following section describe
      the scenario.

       ---------------------------------------
         |         |         |         |
         |         |         |         |
       +---+     +---+     +---+     +---+
       | H1|     | H2|     | H3|     | R |
       +---+     +---+     +---+     +---+

      Figure 5: An Ethernet example of IGMP response schedule

      The router R interfaces with the subnet on one interface, I1,
      which it uses to reach the hosts.  To illustrate this, let us
      assume that hosts H1 and H3 are members of group N1 and H2 is a
      member of group N2.  When the router sends a query, all the
      hosts receive the query at the same time t0.  IGMP_host in H1
      schedules an event to generate a response at a randomly
      generated time t1 (t1 >= t0) which will indicate that host H1 is
      a member of group N1.  Similarly, H2 will schedule an event to
      generate a response at t2 (t2 >= t0) to indicate membership in
      group N2, and H3 at t3 (t3 >= t0) to indicate membership in
      group N1.  When the responses are generated, they are sent with
      the destination address set to the multicast group address.
      Thus all member hosts of a group will receive the responses sent
      by the other hosts in the subnet that are members of the same
      group.

      In the above example, if t1 < t3, IGMP_host in H1 will generate
      a response to update the membership in group N1 before H3 does,
      and H3 will also receive this response in addition to the
      router.  When IGMP_host in H3 receives the response sent by H1,
      IGMP_host in H3 cancels the event scheduled at time t3, since a
      response for that group has already been sent to the router.  To
      make this work, the events to generate responses to queries are
      scheduled randomly, and the interval for scheduling the above-
      described event is forced to be less than the interval at which
      the router sends the queries.

   3. Accept responses sent by the other hosts in the subnet if any
      application layer is a member of the group to which the packet
      is destined.

   4. Accept terminate group requests from the application layer.
      These requests are generated by the application layer when an
      application decides to leave a group.  The IGMP_host updates the
      group information table and subsequently will not send any
      response corresponding to this group (unless another application
      is a member of this group).  When a router does not receive any
      response for a group within a certain amount of time on a
      specific interface, membership of that interface in that group
      is canceled.

   e. Unicast best-effort (UBE) processor node: This node is used to
      generate best-effort traffic in the Internet based on the User
      Datagram Protocol (UDP).  The objective of this node is to model
      the background traffic in a network.  This traffic does not use
      the services provided by RSVP.  The UBE node aims to create the
      behaviors observed in a network which has one type of
      application using the services provided by RSVP to achieve
      specific levels of QoS, and best-effort traffic which uses only
      the services provided by the underlying IP.

      The UBE node generates traffic to a randomly generated IP
      address so as to model competing traffic in the network from
      applications such as FTP.  The packets generated are sent to the
      IP layer, which routes each packet based upon the information in
      the routing table.  The attributes of the UBE node are:

      1. Session InterArrival Time (IAT): the variable used to
         schedule an event to begin a session.
         The UBE node generates an exponentially distributed random
         variable with mean Session IAT and begins to generate data
         traffic at that time.
      2. Data IAT: When the UBE generates data traffic, the
         interarrival time between data packets is Data IAT.  A
         decrease in the value of Data IAT increases the severity of
         congestion in the network.
      3. Session-min and Session-max: When the UBE node starts
         generating data traffic, it remains in that session for a
         random period which is uniformly distributed between
         Session-min and Session-max.

   f. Multicast application processor node: The application layer
      consists of one or more application nodes, which are process
      nodes.  These nodes use the services provided by the lower-layer
      protocols IGMP, RSVP, and IP.  The application layer models the
      requests and traffic generated by application-layer programs.
      Attributes of the application layer are:

      1. Session IAT: the variable used to schedule an event to begin
         a session.  The application node generates an exponentially
         distributed random variable with mean Session IAT and at that
         time begins to generate information for a specific group and
         also to accept packets belonging to that group.
      2. Data IAT: When the application node generates data traffic,
         the interarrival time between the packets uses the Data IAT
         variable as the argument.  The distribution can be any of the
         available distribution functions in OPNET.
      3. Session-min and Session-max: When an application joins a
         session, the duration for which the application stays in that
         session is bounded by Session-min and Session-max.  A
         uniformly distributed random variable between Session-min and
         Session-max is generated for this purpose.  At any given time
         each node will have zero or one flow(s) of data.
      4. NGRPS: This variable is used by the application generating
         multicast traffic to bound the value of the group which an
         application requests the IGMP to join.  The group is selected
         at random from the range [0, NGRPS-1].

   Figure 6: Node Level of Gateway

3.3.3 Router Description

   There are two types of routers in the model, a router serving a
   subnet and a backbone router.  A subnet router has all the
   functions of a backbone router and in addition also has an
   interface to the underlying subnet, which can be either an FDDI
   network or an Ethernet subnet.  In the following section the subnet
   router will be discussed in detail.

   Figure 6 shows the node-level model of a subnet router.

   a. The queueing technique implemented in the router is a
      combination of input and output queueing.  The nodes rx1 to rx10
      are the receivers connected to incoming links.  The router in
      Figure 6 has a physical interface to the FDDI ring or Ethernet,
      which consists of the queue node MAC, the transmitter phy_tx,
      and the receiver phy_rx.  The backbone routers do not have a MAC
      layer.  The services provided and the functions of the MAC layer
      are the same as for the MAC layer in the host discussed above.

      There is one major difference between the MAC node in a subnet
      router and that in a host.  The MAC node in a subnet router
      accepts all arriving multicast packets, unlike the MAC in a
      host, which accepts only the multicast packets for groups of
      which the host is a member.  For this reason the statistic wire
      from the IGMP to the MAC layer does not exist in a router (also
      because a subnet router does not have an application layer).

   b. Addr_trans: The link layer in the router hierarchy is the
      addr_trans processor node, which provides the service of
      translating the IP address to a physical address.  The
      addr_trans node was described above under the host model.

   c. IP layer: The router IP layer provides services to the upper-
      layer transport protocols and also performs routing based upon
      the information in the routing table.  The IP layer maintains
      two routing tables and one group membership table.

      The tables used by the router model are:

      1. Unicast routing table: This table is a one-dimensional array
         which is used to route packets generated by the UDP process
         node in the hosts.  If no route is known to a particular IP
         address, the corresponding entry is set to a default route.
      2. Multicast routing table: This table is an N by I array, where
         N is the maximum number of multicast groups in the model and
         I is the number of interfaces in the router.  This table is
         used to route multicast packets.  The routing table in a
         router is set by an upper-layer routing protocol (see section
         4 below).  When the IP layer receives a multicast packet with
         a session_id corresponding to a session which is utilizing
         MOSPF, it looks up the multicast routing table to obtain the
         next hop.
      3. Group membership table: This table is used to maintain the
         group membership information of all the interfaces of the
         router.  This table, which is also an N by I array, is set by
         the IGMP layer protocol.  The routing protocols use the
         information in the group membership table to calculate and
         set the routes in the multicast routing table.

      Sub-queues: The IP node has three sub-queues, which implement
      queuing based upon the priority of packets arriving from the
      neighboring routers or the underlying subnet.  The queue with
      index 0 has the highest priority.  When a packet arrives at the
      IP node, it is inserted into the appropriate sub-queue based on
      the priority of its traffic category: control traffic, resource-
      reserved traffic, or best-effort traffic.  A non-preemptive
      priority discipline is used in servicing the packets.
After the servicing, packets are 521 sent to the one of the output queues or the MAC. The packets progress 522 through these queues until the transmitter becomes available. 524 Attributes of the IP node are: 526 1. Unique IP address for each interface (a set of transmitter and 527 receiver constitute an interface). 528 2. Service rate: the rate with which packets are serviced at the 529 router. 530 3. Queue size: size of each of the sub queues used to store incoming 531 packets based on the priority can be specified individually 533 d. Output queues: The output queues perform the function of queueing 534 the packets received by the IP layer when the transmitter is busy. 535 A significant amount of queuing takes place in the output queues only 536 if the throughput of the IP node approaches the transmission capacity 537 of the links. The only attribute of the queue node is: 539 Queue size: size of the queue in each queue node. If the queue is 540 full when a packet is received, that packet is dropped. 542 e. IGMP Node: Also modeled in the router is the IGMP for implementing 543 multicasting, the routing protocol, and RSVP for providing specific 544 QoS setup. 546 The IGMP node implements the IGMP protocol as defined in RFC 1112. 547 The IGMP node at a router (Figure 7) is different from the one at a 548 host. The functions of the IGMP node at a router are: 550 1. IGMP node at a router sends queries at regular intervals on all 551 its interfaces. 552 2. When IGMP receives a response to the queries sent, IGMP updates 553 the multicast Group membership table in the IP node and triggers 554 on MOSPF LSA update. 555 3. Every time the IGMP sends a query, it also updates the multicast 556 group membership table in the IP node if no response has been 557 received on for the group on any interface, indicating that a 558 interface is no longer a member of that group. 
This update is done only on entries which indicate an active
membership for a group on an interface where the router has not
received a response to the last query sent.
4. The routing protocol (see section 4 below) uses the information
   in the group membership table to calculate the routes and update
   the multicast routing table.
5. When the IGMP receives a query (an IGMP at a router can receive a
   query from a directly connected neighboring router), the IGMP
   node creates a response for each of the groups it is a member of
   on all the interfaces except the one through which the query was
   received.
6. The IGMP node on a backbone router is disabled, because IGMP is
   only used when a router has hosts on its subnet.

Figure 7: IGMP process on routers

4. RSVP model

The current version of the RSVP model supports only the fixed-filter
reservation style. The following processing takes place in the
indicated modules. The model is current with [2].

4.1 RSVP APPLICATION

4.1.1 Init

Initializes all variables and loads the distribution functions for
multicast group IDs, data, and termination of the session. Transits
to the Idle state after completing all the initializations.

4.1.2 Idle

This state has transitions to two states, Join and Data_Send. It
transits to the Join state at the time that the application is
scheduled to join a session or terminate the current session, and
transits to the Data_Send state when the application is going to
send data.

4.1.3 Join

The application sends a session call to the local RSVP daemon. In
response it receives the session Id from the local daemon. It then
makes a sender or receiver call. The multicast group id is selected
randomly from a uniform distribution. While making a sender call the
application writes all its sender information into a global session
directory.
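The global session directory used by the Join state can be sketched
as follows: senders advertise their information, and receivers look
up the senders for a group before making a receive call. The
structure and field names below are illustrative assumptions, not
the model's actual data structures.

```python
# Sketch of the global session directory of section 4.1.3.
session_directory = {}   # multicast group id -> list of sender entries

def advertise_sender(group_id, sender_addr, sender_port):
    """A sender call records the application's sender information."""
    entry = {"addr": sender_addr, "port": sender_port}
    session_directory.setdefault(group_id, []).append(entry)

def lookup_senders(group_id):
    """A receiver checks the directory before making a receive call."""
    return session_directory.get(group_id, [])

advertise_sender(7, "192.168.1.5", 4000)
senders = lookup_senders(7)
assert lookup_senders(9) == []    # no senders yet for group 9
```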
If the application is acting as a receiver, it checks for the sender
information in the session directory for the multicast group that it
wants to join and makes a receive call to the local RSVP daemon.
Along with the session and receive calls, it makes an IGMP join
call.

If the application chooses to terminate the session to which it was
registered, it sends a release call to the local RSVP daemon and a
terminate call to the IGMP daemon. After completing these functions
it returns to the idle state.

Figure 8: RSVP process on routers

4.1.4 Data_Send

Creates a data packet and sends it to a multicast destination that
it selects. It updates a counter to keep track of how many packets
it has sent. This state by default returns to the Idle state.

4.2 RSVP on Routers

Figure 8 shows the process model of RSVP on routers.

4.2.1 Init

This state calls a function called RouterInitialize which
initializes all the router variables. This state goes to the Idle
state after completing these functions.

4.2.2 Idle

The Idle state transits to the Arr state upon receiving a packet.

4.2.3 Arr

This state checks the type of the arrived packet and calls the
appropriate function depending on the type of message received.

a. PathMsgPro: This function is invoked by the Arr state when a Path
message is received. Before it is called, OSPF routing has been
recomputed to get the latest routing table for forwarding the Path
message.

1. It first checks for a Path State Block (PSB) which has a matching
   destination address; if the sender port, sender address, or
   destination port does not match the values of the Session object
   of the Path State Block, it sends a path error message and
   returns. (At present the application does not send any error
   messages; we print this error message on the console.)
2.
If a PSB is found whose Session Object and Sender Template Object
match those of the Path message received, the current PSB becomes
the forwarding PSB.
3. Search for the PSB whose session and sender template match the
   corresponding objects in the Path message and whose incoming
   interface matches the IncInterface. If such a PSB is found and
   the Previous Hop Address, Next Hop Address, or SenderTspec
   Object does not match that of the Path message, then the values
   of the Path message are copied into the Path State Block and the
   Path Refresh Needed flag is turned on. If the Previous Hop
   Address or Next Hop Address of the PSB differs from the Path
   message, then the Resv Refresh Needed flag is also turned on,
   and the current PSB is made equal to this PSB.
4. If a matching PSB is not found, then a new PSB is created, the
   Path Refresh Needed flag is turned on, and the current PSB is
   made equal to this PSB.
5. If the Path Refresh Needed flag is on, the current PSB is copied
   into the forwarding PSB and the Path Refresh sequence is
   executed. To execute this, a function called PathRefresh is
   used. A Path Refresh is sent to every interface that is in the
   outgoing interfaces list of the forwarding Path State Block.
6. Search for a Reservation State Block whose Filter Spec Object
   matches the Sender Template Object of the forwarding PSB and
   whose outgoing interface matches one of the entries in the
   forwarding PSB's outgoing interface list. If found, then a Resv
   Refresh message is sent to the Previous Hop Address in the
   forwarding PSB and the Update Traffic Control sequence is
   executed.

b. PathRefresh: This function is called from PathMsgPro. It creates
the Path message and sends it through the outgoing interface that is
specified by PathMsgPro.

c. ResvMsgPro: This function is invoked by the Arr state when a Resv
message is received.
1.
Determine the outgoing interface and check for the PSB whose Source
Address and Session Objects match the ones in the Resv message.
2. If such a PSB is not found, then send a ResvErr message saying
   that no path information is available. (We have not implemented
   this message; we only print an error message on the console.)
3. Check for incompatible styles and process the flow descriptor
   list to make reservations, checking the PSB list for the sender
   information. If no sender information is available through the
   PSB list, then send an error message saying that no sender
   information is available. For all the matching PSBs found, if
   the Refresh PHOP list does not have the Previous Hop Address of
   the PSB, then add the Previous Hop Address to the Refresh PHOP
   list.
4. Check for a matching Reservation State Block (RSB) whose Session
   and Filter Spec Objects match those of the Resv message. If no
   such RSB is found, then create a new RSB from the Resv message
   and set the NeworMod flag on. Call this RSB the activeRSB. Turn
   on the Resv Refresh Needed flag.
5. If a matching RSB is found, call it the activeRSB, and if the
   FlowSpec and Scope objects of this RSB differ from those of the
   Resv message, copy the Resv message's Flowspec and Scope objects
   into the activeRSB and set the NeworMod flag on.
6. Call the Update Traffic Control sequence. This is done by
   calling the function UpdateTrafficControl.
7. If the Resv Refresh Needed flag is on, then send a ResvRefresh
   message for each Previous Hop in the Refresh PHOP list. This is
   done by calling the ResvRefresh function for every Previous Hop
   in the Refresh PHOP list.

d. ResvRefresh: This function is called by both PathMsgPro and
ResvMsgPro with an RSB and a Previous Hop as input. The function
constructs the Resv message from the RSB and sends the message to
the Previous Hop.

e.
PathTearPro: This function is invoked by the Arr state when a
PathTear message is received.

1. Search for the PSB whose Session Object and Sender Template
   Object match those of the arrived PathTear message.
2. If such a PSB is not found, do nothing and return.
3. If a matching PSB is found, a PathTear message is sent to all
   the outgoing interfaces that are listed in the outgoing
   interface list of the PSB.
4. Search for all the RSBs whose Filter Spec Object matches the
   Sender Template Object of the PSB; if the outgoing interface of
   such an RSB is listed in the PSB's outgoing interface list,
   delete the RSB.
5. Delete the PSB and return.

f. ResvTearPro: This function is invoked by the Arr state when a
ResvTear message is received.
1. Determine the outgoing interface.
2. Process the flow descriptor list of the arrived ResvTear
   message.
3. Check for the RSB whose Session Object and Filter Spec Object
   match those of the ResvTear message; if there is no such RSB,
   return.
4. If such an RSB is found and the Resv Refresh Needed flag is on,
   send a ResvTear message to all the Previous Hops that are in the
   Refresh PHOP list.
5. Finally, delete the RSB.

g. ResvConfPro: This function is invoked by the Arr state when a
ResvConf message is received. The Resv Confirm is forwarded to the
IP address that was in the Resv Confirm Object of the received
ResvConf message.

h. UpdateTrafficControl: This function is called by PathMsgPro and
ResvMsgPro, and the input to this function is an RSB.
1. The RSB list is searched for an RSB whose Session Object and
   Filter Spec Object match those of the input RSB.
2. The effective kernel TC_Flowspec is computed over all these
   RSBs.
3. If the Filter Spec Object of the RSB does not match any of the
   Filter Spec Objects in the TC Filter Spec List, then add the
   Filter Spec Object to the TC Filter Spec List.
4.
If the FlowSpec Object of the input RSB is greater than the
TC_Flowspec, then turn on the Is_Biggest flag.
5. Search for the matching Traffic Control State Block (TCSB) whose
   Session Object, outgoing interface, and Filter Spec Object match
   those of the input RSB.
6. If such a TCSB is not found, create a new TCSB.
7. If a matching TCSB is found, modify the reservations.
8. If the Is_Biggest flag is on, turn on the Resv Refresh Needed
   flag; otherwise send a ResvConf message to the IP address in the
   ResvConfirm Object of the input RSB.

4.2.4 pathmsg: The functions to be done by this state are performed
through the function call PathMsgPro described above.

4.2.5 resvmsg: The functions that would be done by this state are
performed through the function call ResvMsgPro described above.

4.2.6 ptearmsg: The functions that would be done by this state are
performed through the function call PathTearPro described above.

4.2.7 rtearmsg: The functions that would be done by this state are
performed through the function call ResvTearPro described above.

4.2.8 rconfmsg: The functions that would be done by this state are
performed through the function call ResvConfPro described above.

4.3 RSVP on Hosts

Figure 9 shows the process of RSVP on hosts.

4.3.1 Init

Initializes all the variables. Default transition to the idle state.

Figure 9: RSVP process on hosts

4.3.2 idle

This state transits to the Arr state on packet arrival.

4.3.3 Arr

This state calls the appropriate functions depending on the type of
message received. Default transition to the idle state.

a. MakeSessionCall: This function is called from the Arr state
whenever a Session call is received from the local application.

1. Search for the session information.
2. If it is found, return the corresponding Session Id.
3. If the session information is not found, assign a new Session Id
   to the session.
4. Make an UpCall to the local application with this Session Id.

b. MakeSenderCall: This function is called from the Arr state
whenever a Sender call is received from the local application.

1. Get the information corresponding to the Session Id and create a
   Path message corresponding to this session.
2. A copy of the packet is buffered and used by the host to send
   the Path message periodically.
3. This packet is sent to the IP layer.

c. MakeReserveCall: This function is called from the Arr state
whenever a Reserve call is received from the local application. This
function creates and sends a Resv message. Also, the packet is
buffered for later use.

d. MakeReleaseCall: This function is called from the Arr state
whenever a Release call is received from the local application.
This function generates a PathTear message if the local application
is a sender, or a ResvTear message if the local application is a
receiver.

4.3.4 Session

This state's function is performed by the MakeSessionCall function.

4.3.5 Sender

This state's function is performed by the MakeSenderCall function.

4.3.6 Reserve

This state's function is performed by the MakeReserveCall function.

4.3.7 Release

This state's function is performed by the MakeReleaseCall function.

5. Multicast Routing Model Interface

Because this set of models was intended particularly to enable
evaluation by simulation of various multicast routing protocols, we
give particular attention in this section to the steps necessary to
interface a routing protocol model to the other models. We have
available implementations of DVMRP and OSPF, which we will describe
below. Instructions for invoking these models are contained in a
separate User's Guide for the models.
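The host-side call dispatch of section 4.3.3 can be sketched as
follows. The call names (Session, Sender, Release) follow the model;
the data structures, message fields, and example addresses are
illustrative assumptions.

```python
# Sketch of the host RSVP daemon's Arr-state dispatch (section 4.3.3).
class HostRsvp:
    def __init__(self):
        self.sessions = {}          # (dest, dest_port) -> Session Id
        self.next_session_id = 1

    def make_session_call(self, dest, dest_port):
        """Return an existing Session Id or assign a new one, then
        make the upcall to the local application (items 1-4)."""
        key = (dest, dest_port)
        if key not in self.sessions:
            self.sessions[key] = self.next_session_id
            self.next_session_id += 1
        return self.sessions[key]

    def make_sender_call(self, session_id, tspec):
        """Create a Path message; a copy would be buffered for
        periodic refresh and the packet handed to the IP layer."""
        return {"type": "Path", "session": session_id, "tspec": tspec}

    def make_release_call(self, session_id, is_sender):
        """PathTear for a sender, ResvTear for a receiver."""
        return {"type": "PathTear" if is_sender else "ResvTear",
                "session": session_id}

host = HostRsvp()
sid = host.make_session_call("224.1.1.1", 5000)
path = host.make_sender_call(sid, tspec={"rate": 64000})
tear = host.make_release_call(sid, is_sender=True)
```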
5.1 Creation of multicast routing processor node

Interfacing a multicast routing protocol using the OPNET simulation
package requires the creation of a new routing processor node in the
node editor and linking it via packet streams. Packet streams are
unidirectional links used to interconnect processor nodes, queue
nodes, transmitters, and receiver nodes. A duplex connection between
two nodes is represented by using two unidirectional links to
connect the two nodes to and from each other.

A multicast routing processor node is created in the node editor,
and links are created to and from the processors (duplex
connections) that interact with this module: the IGMP processor node
and the IP processor node. Within the node editor, a new processor
node can be created by selecting the button for processor creation
(plain gray node on the node editor control panel) and clicking on
the desired location in the node editor to place the node. Upon
creation of the processor node, the name of the processor can be
specified by right-clicking the mouse button and entering the name
value in the attribute box presented. Links to and from this node
are generated by selecting the packet stream button (represented by
two gray nodes connected with a solid green arrow on the node editor
control panel), left-clicking the mouse button to specify the source
of the link and right-clicking the mouse button to mark the
destination of the link.

5.2 Interfacing processor nodes

The multicast routing processor node is linked to the IP processor
node and the IGMP processor node, each with a duplex connection. A
duplex connection between two nodes is represented by two
unidirectional links interconnecting them, providing a bidirectional
flow of information or interrupts, as shown in Figure 6.
The IP processor node (in the subnet router) interfaces with the
multicast routing processor node, the unicast routing processor
node, the resource reservation (RSVP) processor node, the ARP
processor node (only on subnet routers and hosts), the IGMP
processor node, and finally the MAC processor node (only on subnet
routers and hosts), each with a duplex connection, with exceptions
for the ARP and MAC nodes.

5.2.1 Interfacing ARP and MAC processor nodes

The service of the ARP node is required only in the direction from
the IP layer to the MAC layer (requiring only a unidirectional link
from the IP processor node to the ARP processor node). The MAC
processor node on the subnet router receives multicast packets
destined for all multicast groups in the subnet, in contrast to the
MAC node on subnet hosts, which only receives multicast packets
destined specifically for its multicast group. The MAC node connects
to the IP processor node with a single unidirectional link from it
to the IP node.

5.2.2 Interfacing IGMP, IP, and multicast routing processor nodes

The IGMP processor node interacts with the multicast routing
processor node, the unicast routing processor node, and the IP
processor node. Because the IGMP node is linked to the IP node, it
is able to update the group membership table (in this model, the
group membership table is represented by the local interface
(interface 0) of the multicast routing table data structure) within
the IP node. This update triggers a signal from the IGMP node to the
multicast routing processor node, causing it to reassess the
multicast routing table within the IP node.
If the change in the group membership table warrants a modification
of the multicast routing table, the multicast routing processor node
interacts with the IP node to modify the current multicast routing
table according to the new group membership information updated by
IGMP.

5.2.2.1 Modification of group membership table

A change in the group membership occurs with the decision at a host
to leave or join a particular multicast group. The IGMP process on
the gateway periodically sends out queries to the IGMP processes on
hosts within the subnet in an attempt to determine which hosts are
currently receiving packets from particular groups. Not receiving a
response to a pending IGMP host query specific to a group indicates
to the gateway IGMP that no host belonging to the particular group
exists in the subnet. This occurs when the last remaining member of
a multicast group in the subnet leaves. In this case the IGMP
processor node updates the group membership table and triggers a
modification of the multicast routing table by alerting the
multicast routing processor node. A prune message specific to the
group is initiated and propagated upward, establishing a prune state
for the interface leading to the present subnet, effectively
removing this subnet from the group-specific multicast spanning tree
and potentially leading to additional pruning of spanning tree edges
as the prune message travels higher up the tree. Joining a multicast
group is also managed by the IGMP process, which updates the group
membership table, leading to a possible modification of the
multicast routing table.

5.2.2.2 Dependency on unicast routing protocol

The multicast routing protocol is dependent on a unicast routing
protocol (RIP or OSPF) to handle multicast routing.
The next hop interface to the source of the packet received, or the
upstream interface, is determined using the unicast routing protocol
to trace the reverse path back to the source of the packet. If the
packet received arrived on this upstream interface, then the packet
can be propagated downstream through the downstream interfaces
(excluding the interface on which the packet was received).
Otherwise, the packet is deemed to be a duplicate and dropped,
halting the propagation of the packet downstream. This repeated
reverse path checking and broadcasting eventually generates the
spanning tree for multicast routing of packets. To determine the
reverse path forwarding interface of a received multicast packet
propagated up from the IP layer, the multicast routing processor
node retrieves a copy of the unicast routing table from the IP
processor node and uses it to recalculate the multicast routing
table in the IP processor node.

5.3 Interrupt Generation

Using the OPNET tools, interrupts to the multicast routing processor
node are generated in several ways. One is the arrival of a
multicast packet along a packet stream (at the multicast routing
processor node) when the packet is received by the MAC node and
propagated up to the IP node, where, upon discarding the IP header,
a determination is made as to which upper layer protocol to send the
packet. A second type of interrupt generation occurs by remote
interrupts from the IGMP process alerting the multicast routing
process of an update in the group membership table. A third occurs
when the specific source/group (S,G) entry for a multicast packet
received at the IP node does not exist in the current multicast
routing table and a new entry needs to be created. The IP node
generates an interrupt to the multicast routing processor node
informing it to create a new source/group entry in the multicast
routing table.
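The reverse-path check described in section 5.2.2.2 can be sketched
as follows. This is an illustrative Python fragment; the table
layout, interface numbering, and addresses are assumptions, not the
model's actual data structures.

```python
# Sketch of reverse path forwarding (RPF), per section 5.2.2.2.
# unicast_table maps a source address to the next-hop (upstream)
# interface toward it; this layout is an illustrative assumption.

def rpf_check(unicast_table, source, in_iface, all_ifaces):
    """Return the list of interfaces to forward on, or [] if the
    packet fails the RPF check and is dropped as a duplicate."""
    upstream = unicast_table[source]   # reverse path toward the source
    if in_iface != upstream:
        return []                      # did not arrive upstream: drop
    # Forward downstream on every interface except the arrival one.
    return [i for i in all_ifaces if i != in_iface]

table = {"10.0.0.1": 0}               # source reached via interface 0
# A packet from 10.0.0.1 arriving on interface 0 passes the check:
out = rpf_check(table, "10.0.0.1", 0, [0, 1, 2])
# The same packet arriving on interface 2 is dropped:
dup = rpf_check(table, "10.0.0.1", 2, [0, 1, 2])
```

Repeating this check at every router is what prunes duplicates and
grows the source-rooted spanning tree described above.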
5.3.1 Types of interrupts

The process interrupts generated within the OPNET model can be
handled by specifying the types of interrupts and the conditions for
the interrupts using the interrupt code, an integer representing the
condition for a specific interrupt. The conditions for interrupts
are specified on the interrupt stream linking the
interrupt-generating state and the state resulting from the
interrupt. For self-interrupts (interrupts occurring among states
within the same process), interrupts of type OPC_INTRPT_SELF are
used. For remote interrupts (interprocess interrupts), the
conditions for specific interrupts are specified from the idle state
to the state resulting from the interrupt within the remote process.

The remote interrupts are of type OPC_INTRPT_REMOTE. A third type of
interrupt is OPC_INTRPT_STRM, which is triggered when a packet
arrives via a packet stream, indicating its arrival. The condition
of this interrupt is also specified from the idle state to the
resultant state by the interrupt condition stream, defined by a
unique interrupt code. For all of these interrupts, the interrupt
code is provided within the header block (written in the C language)
of the interrupted process. When the condition for the interrupt
becomes true, a transition is made to the resultant state specified
by the interrupt stream.

5.3.2 Conditions for interrupts

Several interrupt connections exist to interface the IGMP processor
node, the IP processor node, and the multicast routing processor
node with each other in the present OPNET simulation model. Also,
the IP processor node interfaces with the unicast routing protocol,
which interfaces with the IGMP processor node.
An OPC_INTRPT_STRM interrupt is generated when a multicast packet
arrives via a packet stream from the IP processor node to the
multicast routing processor node. A remote interrupt of type
OPC_INTRPT_REMOTE is generated from the IGMP process to the IP
process when a member of a group relinquishes membership in a
particular group or a new member is added to a group. This new
membership is updated in the group membership table located in the
IP node by the IGMP process, which also generates a remote interrupt
to the multicast routing protocol process, causing a recalculation
of the multicast routing table in the IP module.

5.4 Modifications of modules in the process model

Modifications of routing protocol modules (in fact, all of the
modules in the process model) are made transparently throughout the
network using the OPNET simulation tools. An addition or
modification of a routing module in any subnet will be reflected in
all the subnets.

6. OSPF and MOSPF Models

The OSPF and MOSPF models [5] are implemented in the OSPF model
containing fourteen states. They exist only on routers. Figure 10
shows the process model. The following processing takes place in the
indicated modules.

6.1 init

This state initializes all the router variables. Default transition
to the idle state.

6.2 idle

This state has several transitions. If a packet arrives, it transits
to the arr state. Depending on the interrupts received, it will
transit to the BCOspfLsa, BCMospfLsa, or hello_pks state. In future
versions, links coming up or down will also cause a transition.

6.3 BCOspfLsa

The transition to this state from the idle state is executed
whenever the condition send_ospf_lsa is true, which happens when the
network is being initialized and when ospf_lsa_refresh_timout
occurs.
This state creates Router, Network, and Summary Link State
Advertisements and packs all of them into a Link State Update
packet. The Link State Update packet is sent to the IP layer with a
destination address of AllSPFRouters.

Figure 10: OSPF and MOSPF process model on routers

6.4 BCMospfLsa

The transition to this state from the idle state is executed
whenever the condition send_mospf_lsa is true. This state creates
Group Membership Link State Advertisements and packs them into an
MOSPF Link State Update packet. This MOSPF Link State Update packet
is sent to the IP layer with a destination address of AllSPFRouters.

6.5 arr

The arr state checks the type of packet that is received upon a
packet arrival. It calls the following functions depending on the
protocol Id of the packet received.

a. OspfPkPro: Depending on the type of OSPF/MOSPF packet received,
this function calls the following functions.

1. HelloPk_pro: This function is called whenever a hello packet is
   received. It updates the router's neighbor information, which is
   later used when sending the different LSAs.
2. OspfLsUpdatePk_pro: This function is called when an OSPF LSA
   update packet is received (router LSA, network LSA, or summary
   LSA). If the router is an Area Border Router, or if the LSA
   belongs to the area whose Area Id is the router's Area Id, the
   Link State database is searched to determine whether this LSA
   already exists. If it exists and the existing LSA's LS Sequence
   Number is less than the received LSA's LS Sequence Number, the
   existing LSA is replaced with the received one. The function
   processes a Network LSA only if the router is a designated
   router or an Area Border Router. It processes a Summary LSA only
   if the router is an Area Border Router.
The function also turns on the trigger ospfspfcalc, which is the
condition for the transition from the arr state to ospfspfcalc.
3. MospfLsUpdatePk_pro: This function is called when an MOSPF LSA
   update packet is received. It updates the group membership link
   state database of the router.

6.6 hello_pks

Hello packets are created and sent with a destination address of
AllSPFRouters. Default transition to the idle state.

6.7 mospfspfcalc

The following functions are used to calculate the shortest path
tree and routing table. This state transits to upstr_node upon the
detupstrnode condition.

a. CandListInit: Depending upon the SourceNet of the datagram, the
candidate lists are initialized.

b. MospfCandAddPro: The vertex link is examined, and if the other
end of the link is not a stub network and is not already in the
candidate list, it is added to the candidate list after calculating
the cost to that vertex. If this other end of the link is already on
the shortest path tree and the calculated cost is less than the one
that appears in the shortest path tree entry, the shortest path tree
is updated to show the calculated cost.

c. MospfSPFTreeCalc: The vertex in the candidate list that is
closest to the root is added to the shortest path tree, and its
links are considered for possible inclusion in the candidate list.

d. MCRoutetableCalc: The multicast routing table is calculated
using the information in the MOSPF shortest path tree.

6.8 ospfspfcalc

The following functions are used in this state to calculate the
shortest path tree and, using this information, the routing table.
Transition to the ospfspfcalc state occurs on the ospfcalc
condition. This is set to one after processing all functions in the
state.

a. OspfCandidateAddPro: This function initializes the candidate
list by examining the link state advertisement of the router.
For each link in this advertisement, if the other end of the link is
a router or transit network and it is not already in the shortest
path tree, then the distance between these vertices is calculated.
If the other end of this link is not already on the candidate list,
or if the distance calculated is less than the value that appears
for this other end, the other end of the link is added to the
candidate list.

b. OspfSPTreeBuild: This function pulls from the candidate list the
vertex that is closest to the root and adds it to the shortest path
tree. In doing so it deletes the vertex from the candidate list.
This function continues until the candidate list is empty.

c. OspfStubLinkPro: In this procedure the stub networks are added
to the shortest path tree.

d. OspfSummaryLinkPro: If the router is an Area Border Router, the
summary links that it has received are examined. The route to the
Area Border Router advertising this summary LSA is looked up in the
routing table. If one is found, a routing table update is done by
adding the route to the network specified in the summary LSA; the
cost of this route is the sum of the cost to the Area Border Router
advertising it and the cost to reach the network from that Area
Border Router.

e. RoutingTableCalc: This function updates the routing table by
examining the shortest path tree data structure.

6.9 upstr_node

This state does not do anything in the present model. It transitions
to the DABRA state.

6.10 DABRA

If the router is an Area Border Router and the area is the source
area, then a DABRA message is constructed and sent to all the
downstream areas. Default transition to the idle state.

7. DVMRP Model

The DVMRP model is implemented based on reference [6], DVMRP
version 3. There are nine states. The DVMRP process exists only on
routers.
Figure 11 shows the states of the DVMRP process.

7.1 Init

Initializes all variables, the routing table, and the forwarding
table, and loads the simulation parameters. It transits to the Idle
state after completing all the initializations.

7.2 Idle

The simulation waits for the next scheduled event or remotely
invoked event in the Idle state and transits to the appropriate
state accordingly. In the DVMRP model, the Idle state has
transitions to the Probe_Send, Report_Send, Prune_Send, Graft_Send,
Arr_Pkt, Route_Calc, and Timer states.

Figure 11: DVMRP process on routers

7.3 Probe_Send

A DVMRP router sends Probe messages periodically to inform other
DVMRP routers that it is operational. A DVMRP router lists all its
known neighbors' addresses in the Probe message and sends it to the
All-DVMRP-Routers address. The routers will not process any message
that comes from an unknown neighbor.

7.4 Report_Send

To avoid all DVMRP routers sending Reports at the same time, the
interval between two Report messages is uniformly distributed with
an average of 60 seconds. The router lists the source router's
address, the upstream router's address, and the metric of all
sources in the Report message and sends it to the All-DVMRP-Routers
address.

7.5 Prune_Send

The transition to this state is triggered by the local IGMP process.
When a host on the subnetwork drops from a group, the IGMP process
asks DVMRP to see if the branch should be pruned.

The router obtains the group number from IGMP and checks the IP
multicast membership table to find out if any group member is still
in the group. If the router determines that the last host has
resigned, it goes through the entire forwarding table to locate all
sources for that group.
The router sends a Prune message, containing the source address,
group address, and prune lifetime, separately for each (source,
group) pair and records the row as pruned in the forwarding table.

7.6 Graft_Send

The transition to this state is triggered by the local IGMP process.
Once a multicast delivery tree has been pruned, Graft messages are
necessary when a host in the local subnetwork joins the group. A
Graft message sent to the upstream router is acknowledged hop by hop
to the root of the tree, guaranteeing end-to-end delivery.

The router obtains the group number from IGMP and goes through the
forwarding table to locate all traffic sources for that group. A
Graft message is sent to the upstream router with the source address
and group address for each (source, group) pair. The router also sets
up a timer for each Graft message to wait for an acknowledgement.

7.7 Arr_Pkt

All DVMRP control messages are passed up to the DVMRP layer by IP.
The function performed by the DVMRP layer depends on the type of
message received.

a. Probe message: The router checks the neighbors' list in the Probe
message and updates their status to indicate the availability of its
neighbors.

b. Report message: By exchanging Report messages, the routers build
the multicast delivery tree rooted at each source. A function called
ReportPkPro is called to handle all possible situations when a Report
message is received. If the message is a poison-reverse report and
does not come from one of the dependent downstreams, the incoming
interface is added to the router's downstream list. If the message is
not a poison-reverse report but came from one of the downstreams,
that interface is deleted from the downstream list.
The router then compares the metric obtained from the message with
the metric of the current upstream; if the new metric is less than
the old one, the router's upstream interface is updated.

c. Prune message: The router extracts the source address, group
address, and prune lifetime, and marks the incoming interface as
pruned in the dependent downstream list of the (source, group) pair.
If all downstream interfaces have been pruned, the router sends a
Prune message to its upstream.

d. Graft message: The router extracts the source and group addresses
and activates the incoming interface in the dependent downstream list
of the (source, group) pair. If the (source, group) pair has been
pruned, the router reconnects the branch by sending a Graft message
to its upstream interface.

e. Graft Acknowledge message: The router extracts the source and
group addresses and clears the Graft message timer of the (source,
group) pair in the forwarding table.

7.8 Route_Calc

The transition to this state is triggered by the local IP process.
Once IP receives a packet, it fires a remote interrupt to DVMRP and
asks DVMRP to prepare the outgoing interfaces for the packet. The
DVMRP process obtains the packet's source address and group address
from IP and checks the (source, group) pairs in the forwarding table
to decide which branches of the multicast delivery tree have group
members. The Group Membership Table on IP is updated based on this
knowledge.

7.9 Timer

This state is activated once every second. It checks the forwarding
table: if the Graft message acknowledgement timer has expired, the
router retransmits the Graft message to the upstream. If the prune
state lifetime timer has expired, the router grafts this interface so
that the downstream router can receive packets to the group again.
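A minimal sketch of this once-per-second sweep over the forwarding
table follows. The row field names (graft_ack_deadline, prune_expiry)
are hypothetical; the draft only describes the two timers.

```python
import time

def timer_sweep(forwarding_table, now=None):
    """Return two lists of (source, group) pairs: those whose Graft
    acknowledgement timer has expired (Graft must be retransmitted)
    and those whose prune lifetime has expired (interface regrafted)."""
    now = time.time() if now is None else now
    retransmit_graft, regraft = [], []
    for row in forwarding_table:
        if row.get("graft_ack_deadline") and now >= row["graft_ack_deadline"]:
            retransmit_graft.append((row["source"], row["group"]))
        if row.get("prune_expiry") and now >= row["prune_expiry"]:
            regraft.append((row["source"], row["group"]))
    return retransmit_graft, regraft
```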
The router also checks whether the (source, group) pair is pruned by
the upstream router; if so, it sends a Graft message to the upstream
interface.

8. Simulation performance

Our simulations of three network models with MOSPF routing have shown
good scalability of the protocol. The platform used was an SGI Octane
workstation with 512 MB main memory and a MIPS R10000 CPU, Rev 2.7.
Below we list the real running time of each model along with its
major elements and the packet inter-arrival times for the streams
generated in the hosts.

   Simulated  Debug Model        Intermediate Model  Large Model
   time       11 routers         42 routers          86 routers
              12 hosts           48 hosts            96 hosts

              Reserve data:      Reserve data:       Reserve data:
              0.01 s             0.02 s              0.02 s
              Best-effort data:  Best-effort data:   Best-effort data:
              0.01 s             0.025 s             0.025 s

   100 s      3 hours            14 hours            30 hours
   200 s      7 hours            30 hours            ---

9. Future work

We hope to receive assistance from the IPmc/RSVP development
community within the IETF in validating and refining this model. We
believe it will be a useful tool for predicting the behavior of
RSVP-capable systems.

10. References

[1] Deering, S., "Host Extensions for IP Multicasting", RFC 1112,
    August 1989

[2] Braden, R., et al., "Resource ReSerVation Protocol (RSVP) --
    Version 1 Functional Specification", RFC 2205, September 1997

[3] Wroclawski, J., "The Use of RSVP with IETF Integrated Services",
    RFC 2210, September 1997

[4] MIL3 Inc., "OPNET Modeler Tutorial Version 3", Washington, DC,
    1997

[5] Moy, J., "Multicast Extensions to OSPF", RFC 1584, March 1994

[6] Pusateri, T., "Distance Vector Multicast Routing Protocol",
    draft-ietf-idmr-dvmrp-v3-06, work in progress, March 1998

Authors' Addresses

J. Mark Pullen
C3I Center/Computer Science
Mail Stop 4A5
George Mason University
Fairfax, VA 22032
mpullen@gmu.edu

Ravi Malghan
3141 Fairview Park Drive, Suite 700
Falls Church, VA 22042
rmalghan@bacon.gmu.edu

Lava K. Lavu
Bay Networks
600 Technology Park Dr.
Billerica, MA 01821
llavu@bacon.gmu.edu

Gang Duan
Oracle Co.
Redwood Shores, CA 94065
gduan@us.oracle.com

Jiemei Ma
Newbridge Networks Inc.
593 Herndon Parkway
Herndon, VA 20170
jma@newbridge.com

Hoon Nah
C3I Center
Mail Stop 4B5
George Mason University
Fairfax, VA 22030
hnah@bacon.gmu.edu

Expiration: 30 March 1999