Network Working Group                                      Indra Widjaja
                                          Fujitsu Network Communications
Internet Draft                                             Anwar Elwalid
Expires in six months                    Bell Labs, Lucent Technologies
                                                            October 1998

           Performance Issues in VC-Merge Capable ATM LSRs

Status of this Memo

   This document is an Internet Draft.  Internet Drafts are working
   documents of the Internet Engineering Task Force (IETF), its Areas,
   and its Working Groups.  Note that other groups may also distribute
   working documents as Internet Drafts.

   Internet Drafts are draft documents valid for a maximum of six
   months.  Internet Drafts may be updated, replaced, or obsoleted by
   other documents at any time.  It is not appropriate to use Internet
   Drafts as reference material or to cite them other than as a
   "working draft" or "work in progress."
   Please check the 1id-abstracts.txt listing contained in the
   internet-drafts Shadow Directories on nic.ddn.mil, nnsc.nsf.net,
   nic.nordu.net, ftp.nisc.sri.com, or munnari.oz.au to learn the
   current status of any Internet Draft.

Abstract

   VC merging allows many routes to be mapped to the same VC label,
   thereby providing a scalable mapping method that can support
   thousands of edge routers.  VC merging requires reassembly buffers
   so that cells belonging to different packets intended for the same
   destination do not interleave with each other.  This document
   investigates the additional buffering that VC merging requires for
   the reassembly buffers and other buffers.  The main result indicates
   that VC merging incurs minimal overhead compared to non-VC merging
   in terms of additional buffering.  Moreover, the overhead decreases
   as utilization increases, or as the traffic becomes more bursty.

1.0 Introduction

   Recently, some radical proposals to overhaul legacy router
   architectures have been presented by several organizations, notably
   Ipsilon's IP switching [1], Cisco's Tag switching [2], Toshiba's
   CSR [3], IBM's ARIS [4], and the IETF's MPLS [5].  Although the
   details of their implementations vary, one fundamental concept is
   shared by all these proposals: map the route information to short,
   fixed-length labels so that next-hop routers can be determined by
   direct indexing.

   Although any layer 2 switching mechanism can in principle be
   applied, the use of ATM switches in the backbone network is believed
   to be a very attractive solution, since ATM hardware switches have
   been extensively studied and are widely available in many different
   architectures.  In this document, we assume that layer 2 switching
   uses ATM technology.  In this case, each IP packet may be segmented
   into multiple 53-byte cells before being switched.
   Traditionally, AAL 5 has been used as the encapsulation method in
   data communications, since it is simple, efficient, and has a
   powerful error detection mechanism.  For the ATM switch to forward
   incoming cells to the correct outputs, the IP route information
   needs to be mapped to ATM labels, which are kept in the VPI and/or
   VCI fields.  The relevant route information that is stored
   semi-permanently in the IP routing table contains the tuple
   (destination, next-hop router).  The route information changes when
   the network state changes; this typically occurs slowly, except
   during transient cases.  The word ``destination'' typically refers
   to the destination network (or CIDR prefix), but can be readily
   generalized to (destination network, QoS), (destination host, QoS),
   or many other granularities.  In this document, the destination can
   mean any of the above or other possible granularities.

   Several methods of mapping the route information to ATM labels
   exist.  In the simplest form, each source-destination pair is mapped
   to a unique VC value at a switch.  This method, called the non-VC
   merging case, allows the receiver to easily reassemble cells into
   their respective packets, since the VC values can be used to
   distinguish the senders.  However, if there are n sources and
   destinations, each switch is potentially required to manage O(n^2)
   VC labels for full-meshed connectivity.  For example, if there are
   1,000 sources/destinations, then the size of the VC routing table is
   on the order of 1,000,000 entries.  Clearly, this method is not
   scalable to large networks.  In the second method, called VP
   merging, the VP labels of cells that are intended for the same
   destination are translated to the same outgoing VP value, thereby
   reducing VP consumption downstream.
   For each VP, the VC value is used to identify the sender, so that
   the receiver can reconstruct packets even though cells from
   different packets are allowed to interleave.  Each switch is now
   required to manage O(n) VP labels - a considerable saving from
   O(n^2).  Although the number of label entries is considerably
   reduced, VP merging is limited to only 4,096 entries at the
   network-to-network interface.  Moreover, VP merging requires
   coordination of the VC values for a given VP, which introduces more
   complexity.  A third method, called VC merging, maps incoming VC
   labels for the same destination to the same outgoing VC label.  This
   method is scalable and does not have the space constraint problem of
   VP merging.  With VC merging, cells for the same destination are
   indistinguishable at the output of a switch.  Therefore, cells
   belonging to different packets for the same destination cannot
   interleave with each other, or else the receiver will not be able to
   reassemble the packets.  With VC merging, the boundary between two
   adjacent packets is identified by the ``End-of-Packet'' (EOP) marker
   used by AAL 5.

   It is worth mentioning that cell interleaving may be allowed if we
   use the AAL 3/4 Message Identifier (MID) field to identify the
   sender uniquely.  However, this method has some serious drawbacks:
   1) the MID size may not be sufficient to identify all senders, 2)
   the encapsulation method is not efficient, 3) the CRC capability is
   not as powerful as in AAL 5, and 4) AAL 3/4 is not as widely
   supported as AAL 5 in data communications.

   Before VC merging with no cell interleaving can be qualified as the
   most promising approach, two main issues need to be addressed.
   First, the feasibility of an ATM switch that is capable of merging
   VCs needs to be investigated.
   Second, there is widespread concern that the additional amount of
   buffering required to implement VC merging is excessive, making the
   VC-merging method impractical.  Through analysis and simulation, we
   will dispel these concerns in this document by showing that the
   additional buffer requirement for VC merging is minimal for most
   practical purposes.  Other performance-related issues, such as the
   additional delay due to VC merging, will also be discussed.

2.0 A VC-Merge Capable MPLS Switch Architecture

   In principle, the reassembly buffers can be placed at the input or
   output side of a switch.  If they are located at the input, then the
   switch fabric has to transfer all cells belonging to a given packet
   in an atomic manner, since cells are not allowed to interleave.
   This requires the fabric to perform frame switching, which is
   neither flexible nor desirable when multiple QoSs need to be
   supported.  On the other hand, if the reassembly buffers are located
   at the output, the switch fabric can forward each cell independently
   as in normal ATM switching.  Placing the reassembly buffers at the
   output makes an output-buffered ATM switch a natural choice.

   We consider a generic output-buffered VC-merge capable MPLS switch
   with VCI translation performed at the output.  Other possible
   architectures may also be adopted.  The switch consists of a
   non-blocking cell switch fabric and multiple output modules (OMs),
   each associated with an output port.  Each arriving ATM cell is
   appended with two fields containing an output port number and an
   input port number.  Based on the output port number, the switch
   fabric forwards each cell to the correct output port, just as in
   normal ATM switches.  If VC merging is not implemented, then the OM
   consists of an output buffer.
   If VC merging is implemented, the OM contains a number of reassembly
   buffers (RBs), followed by a merging unit and an output buffer.
   Each RB typically corresponds to an incoming VC value.  It is
   important to note that each buffer is a logical buffer; it is
   envisioned that a common pool of memory is shared by the reassembly
   buffers and the output buffer.

   The purpose of the RB is to ensure that cells for a given packet do
   not interleave with other cells that are merged to the same VC.
   This mechanism (called store-and-forward at the packet level) can be
   accomplished by storing each incoming cell for a given packet in the
   RB until the last cell of the packet arrives.  When the last cell
   arrives, all cells in the packet are transferred in an atomic manner
   to the output buffer for transmission to the next hop.  It is worth
   pointing out that performing a cut-through mode at the RB is not
   recommended, since it would waste bandwidth if the subsequent cells
   are delayed.  During the transfer of a packet to the output buffer,
   the incoming VCI is translated to the outgoing VCI by the merging
   unit.  To save VC translation table space, different incoming VCIs
   are merged to the same outgoing VCI during the translation process
   if the cells are intended for the same destination.  If all traffic
   is best-effort, full merging, where all incoming VCs destined for
   the same destination network are mapped to the same outgoing VC, can
   be implemented.  However, if the traffic is composed of multiple
   classes, it is desirable to implement partial merging, where
   incoming VCs destined for the same (destination network, QoS) are
   mapped to the same outgoing VC.
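   The translation step described above can be sketched as a table
   lookup keyed on the destination (full merging) or on the
   (destination, QoS) pair (partial merging).  The following minimal
   Python model is illustrative only; the class name and the VCI
   allocation scheme are assumptions, not part of the proposed
   architecture.

```python
class MergeTable:
    """Illustrative merging-unit table: maps a merge key to a single
    outgoing VCI, so that many incoming VCs share one outgoing VC."""

    def __init__(self, partial=False):
        self.partial = partial   # True: merge on (destination, QoS)
        self.table = {}          # merge key -> outgoing VCI
        self.next_vci = 100      # hypothetical VCI allocator

    def outgoing_vci(self, destination, qos=None):
        key = (destination, qos) if self.partial else destination
        if key not in self.table:
            self.table[key] = self.next_vci
            self.next_vci += 1
        return self.table[key]


full = MergeTable(partial=False)
# Full merging: two flows to the same destination share one outgoing VCI.
assert full.outgoing_vci("net-A") == full.outgoing_vci("net-A")

part = MergeTable(partial=True)
# Partial merging: different QoS classes keep distinct outgoing VCIs.
assert part.outgoing_vci("net-A", "gold") != part.outgoing_vci("net-A", "best-effort")
```

   The table grows with the number of distinct destinations (or
   destination/QoS pairs), not with the number of source-destination
   pairs, which is the scaling advantage of VC merging.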
   Regardless of whether full merging or partial merging is
   implemented, the output buffer may consist of a single FIFO buffer
   or multiple buffers, each corresponding to a destination network or
   (destination network, QoS).  If a single output buffer is used, then
   the switch essentially tries to emulate frame switching.  If
   multiple output buffers are used, VC merging is different from frame
   switching, since cells of a given packet are not bound to be
   transmitted back-to-back.  In fact, fair queueing can be implemented
   so that cells from the respective output buffers are served
   according to some QoS requirements.  Note that cell-by-cell
   scheduling can be implemented with VC merging, whereas only
   packet-by-packet scheduling can be implemented with frame switching.
   In summary, VC merging is more flexible than frame switching and
   supports better QoS control.

3.0 Performance Investigation of VC Merging

   This section compares the VC-merging switch and the non-VC merging
   switch.  The non-VC merging switch is analogous to the traditional
   output-buffered ATM switch, whereby cells of any packets are allowed
   to interleave.  Since each cell is a distinct unit of information,
   the non-VC merging switch is a work-conserving system at the cell
   level.  On the other hand, the VC-merging switch is
   non-work-conserving, so its performance is always lower than that of
   the non-VC merging switch.  The main objective here is to study the
   effect of VC merging on the performance of MPLS switches - the
   additional delay, additional buffering, etc. - subject to different
   traffic conditions.

   In the simulation, the arrival process to each reassembly buffer is
   an independent ON-OFF process.  Cells within an ON period form a
   single packet.  During an OFF period, the slots are idle.  Note that
   the ON-OFF process is a general process that can model a wide range
   of traffic processes.
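   The ON-OFF arrival process above can be sketched as follows,
   assuming geometrically distributed ON (packet) and OFF (idle)
   periods, as used in Section 3.1; the function and parameter names
   are illustrative.

```python
import random


def on_off_source(mean_pkt_cells, utilization, num_slots, rng):
    """Generate per-slot cell arrivals (1 = cell, 0 = idle) from an
    ON-OFF process.  ON periods are geometric with mean mean_pkt_cells;
    the mean OFF period is chosen so the long-run fraction of busy
    slots matches `utilization` (which must be < 1)."""
    p_on = 1.0 / mean_pkt_cells                       # P(packet ends) per slot
    mean_off = mean_pkt_cells * (1 - utilization) / utilization
    p_off = 1.0 / mean_off                            # P(idle gap ends) per slot
    slots, on = [], True
    for _ in range(num_slots):
        slots.append(1 if on else 0)
        if on and rng.random() < p_on:
            on = False
        elif not on and rng.random() < p_off:
            on = True
    return slots


rng = random.Random(1)
slots = on_off_source(mean_pkt_cells=10, utilization=0.5, num_slots=200000, rng=rng)
print(sum(slots) / len(slots))   # long-run busy fraction, close to 0.5
```

   With mean ON period T_on and mean OFF period T_off, the utilization
   is T_on / (T_on + T_off), which is how mean_off is derived above.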
3.1 Effect of Utilization on Additional Buffer Requirement

   We first investigate the effect of switch utilization on the
   additional buffer requirement for a given overflow probability.  To
   carry out the comparison, we analyze the VC-merging and non-VC
   merging cases when the average packet size is equal to 10 cells,
   using geometrically distributed packet sizes and packet interarrival
   times, with cells of a packet arriving contiguously (later, we
   consider other distributions).  The results show that, as expected,
   the VC-merging switch requires more buffering than the non-VC
   merging switch.  When the utilization is low, there may be
   relatively many incomplete packets in the reassembly buffers at any
   given time, thus wasting storage resources.  For example, when the
   utilization is 0.3, VC merging requires additional storage of about
   45 cells to achieve the same overflow probability.  However, as the
   utilization increases to 0.9, the additional storage to achieve the
   same overflow probability drops to about 30 cells.  The reason is
   that as traffic intensity increases, the VC-merging system becomes
   closer to work-conserving.

   It is important to note that ATM switches must be dimensioned at
   high utilization values (in the range of 0.8-0.9) to withstand harsh
   traffic conditions.  At a utilization of 0.9, a VC-merge ATM switch
   requires a buffer of 976 cells to provide an overflow probability of
   10^{-5}, whereas a non-VC merge ATM switch requires a buffer of 946
   cells.  These numbers translate to an additional buffer requirement
   of about 3% for VC merging - hardly a significant additional
   buffering cost.

3.2 Effect of Packet Size on Additional Buffer Requirement

   We now vary the average packet size to see the impact on the buffer
   requirement.  We fix the utilization at 0.5 and use two different
   average packet sizes, B=10 and B=30.
   To achieve the same overflow probability, VC merging requires an
   additional buffer of about 40 cells (or 4 packets) compared to
   non-VC merging when B=10.  When B=30, the additional buffer
   requirement is about 90 cells (or 3 packets).  As expected, the
   additional buffer requirement in terms of cells increases as the
   packet size increases.  However, the additional buffer requirement
   is roughly constant in terms of packets.

3.3 Additional Buffer Overhead Due to Packet Reassembly

   There may be some concern that VC merging requires too much
   buffering when the number of reassembly buffers increases, which
   would happen if the switch size is increased or if cells for packets
   going to different destinations are allowed to interleave.  We will
   show that this concern is unfounded, since buffer sharing becomes
   more efficient as the number of reassembly buffers increases.

   To demonstrate our argument, we consider the overflow probability
   for VC merging for several numbers of reassembly buffers (N): N=4,
   8, 16, 32, 64, and 128.  The utilization is fixed at 0.8 for each
   case, and the average packet size is chosen to be 10 cells.  For a
   given overflow probability, the increase in buffer requirement
   becomes less pronounced as N increases.  Beyond a certain value
   (N=32), the increase in buffer requirement becomes insignificant.
   The reason is that as N increases, the traffic gets thinned and
   eventually approaches a limiting process.

3.4 Effect of Interarrival Time Distribution on Additional Buffer

   We now turn our attention to different traffic processes.  First, we
   use the same ON period distribution and change the OFF period
   distribution from geometric to hypergeometric, which has a larger
   Squared Coefficient of Variation (SCV), defined as the ratio of the
   variance to the square of the mean.  Here we fix the utilization at
   0.5.
   As expected, the switch performance degrades as the SCV increases in
   both the VC-merging and non-VC merging cases.  To achieve a buffer
   overflow probability of 10^{-4}, the additional buffer required is
   about 40 cells when SCV=1, 26 cells when SCV=1.5, and 24 cells when
   SCV=2.6.  The result shows that VC merging becomes closer to
   work-conserving as the SCV increases.  In summary, as the
   interarrival time between packets becomes more bursty, the
   additional buffer requirement for VC merging diminishes.

3.5 Effect of Internet Packets on Additional Buffer Requirement

   Up to now, the packet size has been modeled as a geometric
   distribution with a certain parameter.  We modify the packet size
   distribution to a more realistic one for the rest of this document.
   Since the initial deployment of VC-merge capable ATM switches is
   likely to be in the core network, it is more realistic to consider
   the packet size distribution in the Wide Area Network.  To this end,
   we refer to the data given in [6].  The data, collected on Feb 10,
   1996, in the FIX-West network, is in the form of a probability mass
   function versus packet size in bytes.  Data collected on other dates
   closely resemble this one.

   The distribution appears bi-modal, with two large masses at 40 bytes
   (about a third) due to TCP acknowledgment packets, and at 552 bytes
   (about 22 percent) due to Maximum Transmission Unit (MTU)
   limitations in many routers.  Other prominent packet sizes include
   72 bytes (about 4.1 percent), 576 bytes (about 3.6 percent), 44
   bytes (about 3 percent), 185 bytes (about 2.7 percent), and 1500
   bytes (about 1.5 percent) due to the Ethernet MTU.  The mean packet
   size is 257 bytes, and the variance is 84,287 bytes^2.  Thus, the
   SCV for the Internet packet size is about 1.3.
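   Assuming AAL 5 null encapsulation (an 8-byte trailer, with the PDU
   padded out to fill 48-byte cell payloads), the prominent packet
   sizes above translate into ATM cell counts as in this small sketch:

```python
import math


def aal5_cells(packet_bytes, trailer=8, payload=48):
    """Number of 53-byte ATM cells needed to carry an IP packet as an
    AAL 5 PDU with null encapsulation: the packet plus the 8-byte
    AAL 5 trailer, padded up to a multiple of the 48-byte cell payload."""
    return math.ceil((packet_bytes + trailer) / payload)


for size in (40, 552, 1500):
    print(size, "bytes ->", aal5_cells(size), "cells")
```

   A 40-byte TCP acknowledgment fits in a single cell, a 552-byte
   packet needs 12 cells, and a 1500-byte Ethernet-MTU packet needs 32
   cells; averaging over the whole distribution gives the roughly
   6-cell mean packet size used in the following sections.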
   To convert the IP packet size in bytes to ATM cells, we assume AAL 5
   with null encapsulation, where the additional overhead in AAL 5 is 8
   bytes [7].  Using the null encapsulation technique, the average
   packet size is about 6.2 ATM cells.

   We examine the buffer overflow probability against the buffer size
   using the Internet packet size distribution.  The OFF period is
   assumed to have a geometric distribution.  Again, we find the same
   behavior as before, except that the buffer requirement drops with
   Internet packets due to the smaller average packet size.

3.6 Effect of Correlated Interarrival Times on Additional Buffer
    Requirement

   To model correlated interarrival times, we use the DAR(p) process
   (discrete autoregressive process of order p) [8], which has been
   used to accurately model video traffic (the Star Wars movie) in [9].
   The DAR(p) process is a p-th order (lag-p) discrete-time Markov
   chain.  The state of the process at time n depends explicitly on the
   states at times (n-1), ..., (n-p).

   We examine the overflow probability for the case where the
   interarrival time between packets is geometric and independent, and
   the case where the interarrival time is geometric and correlated
   with the previous one, with a coefficient of correlation equal to
   0.9.  The empirical distribution of the Internet packet size from
   the last section is used.  The utilization is fixed at 0.5 in each
   case.  Although the overflow probability increases as p increases,
   the additional amount of buffering for VC merging actually decreases
   as p, or equivalently the correlation, increases.  One can conclude
   that higher-order correlation or long-range dependence, which occurs
   in self-similar traffic, will result in similar qualitative
   performance.

3.7 Slow Sources

   The discussions up to now have assumed that cells within a packet
   arrive back-to-back.
   When traffic shaping is implemented, adjacent cells within the same
   packet would typically be spaced by idle slots.  We call such
   sources "slow sources".  Adjacent cells within the same packet may
   also be perturbed and spaced apart as they travel downstream, due to
   the merging and splitting of cells at preceding nodes.

   Here, we assume that each source transmits at a rate of r_s (0 < r_s
   <= 1), in units of the link speed, to the ATM switch.  To capture
   the merging and splitting of cells as they travel in the network, we
   will also assume that the cell interarrival time within a packet is
   randomly perturbed.  To model this perturbation, we stretch the
   original ON period by 1/r_s, and flip a Bernoulli coin with
   parameter r_s during the stretched ON period.  In other words,
   during the ON period a slot contains a cell with probability r_s,
   and is idle with probability 1-r_s.  By doing so, the average packet
   size remains the same as r_s is varied.  We simulated slow sources
   on the VC-merge ATM switch using the Internet packet size
   distribution with r_s=1 and r_s=0.2.  The packet interarrival time
   is assumed to be geometrically distributed.  Reducing the source
   rate in general reduces the stress on the ATM switches, since the
   traffic becomes smoother.  With VC merging, slow sources also have
   the effect of increasing the reassembly time.  At a utilization of
   0.5, the reassembly time is more dominant and causes the slow source
   (with r_s=0.2) to require more buffering than the fast source (with
   r_s=1).  At a utilization of 0.8, the smoother traffic is more
   dominant and causes the slow source (with r_s=0.2) to require less
   buffering than the fast source (with r_s=1).  This result again has
   practical consequences for ATM switch design, where buffer
   dimensioning is performed at reasonably high utilization.  In this
   situation, slow sources only help.
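   One reading of the stretching procedure above can be sketched as
   follows (the function name is illustrative): stretching a packet's
   ON period by 1/r_s and thinning slots with a Bernoulli coin of
   parameter r_s leaves the mean packet size unchanged as r_s varies.

```python
import random


def stretch_packet(num_cells, r_s, rng):
    """Model a slow source for one packet: stretch the ON period to
    num_cells / r_s slots, then mark each slot as carrying a cell with
    probability r_s (1 = cell, 0 = idle).  The expected number of cells
    is (num_cells / r_s) * r_s = num_cells, independent of r_s."""
    stretched_slots = round(num_cells / r_s)
    return [1 if rng.random() < r_s else 0 for _ in range(stretched_slots)]


rng = random.Random(7)
# A fast source (r_s = 1) emits the packet back-to-back, no idle slots.
assert sum(stretch_packet(10, 1.0, rng)) == 10

# A slow source (r_s = 0.2) spreads the same packet over ~50 slots,
# but the mean number of cells per packet stays near 10.
sizes = [sum(stretch_packet(10, 0.2, rng)) for _ in range(20000)]
print(sum(sizes) / len(sizes))
```

   The exact cell count per packet now varies from packet to packet,
   which is consistent with the random perturbation the model is meant
   to capture.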
3.8 Packet Delay

   It is of interest to see the impact of cell reassembly on packet
   delay.  Here we consider the delay at one node only; end-to-end
   delays are the subject of ongoing work.  We define the delay of a
   packet as the time between the arrival of the first cell of the
   packet at the switch and the departure of the last cell of the same
   packet.  We study the average packet delay as a function of
   utilization for both VC-merging and non-VC merging switches for the
   case r_s=1 (back-to-back cells in a packet).  Again, the Internet
   packet size distribution is used to reflect the more realistic
   scenario.  The interarrival time of packets is geometrically
   distributed.  Although the difference in worst-case delay between VC
   merging and non-VC merging can in theory be very large, we observe
   that the difference in the average delays of the two systems is
   consistently about one average packet time for a wide range of
   utilizations.  The difference is due to the average time needed to
   reassemble a packet.

   To see the effect of cell spacing in a packet, we again simulate the
   average packet delay, now for r_s=0.2.  We observe that the
   difference in average delays of VC merging and non-VC merging
   increases to a few packet times (approximately 20 cells at high
   utilization).  It should be noted that when a VC-merge capable ATM
   switch reassembles packets, it in effect performs the task that the
   receiver would otherwise have to do.  From a practical point of
   view, an increase of 20 cells translates to about 60 microseconds at
   OC-3 link speed.  This additional delay should be insignificant for
   most applications.

4.0 Security Considerations

   There are no security considerations directly related to this
   document, since the document is concerned with the performance
   implications of VC merging.
   There are also no known security considerations arising from the
   proposed modification of a legacy ATM LSR to incorporate VC merging.

5.0 Discussion

   This document has investigated the impact of VC merging on the
   performance of an ATM LSR.  We experimented with various traffic
   processes to understand the detailed behavior of VC-merge capable
   ATM LSRs.  Our main finding indicates that VC merging incurs minimal
   overhead compared to non-VC merging in terms of additional
   buffering.  Moreover, the overhead decreases as utilization
   increases, or as the traffic becomes more bursty.  This fact has
   important practical consequences, since switches are dimensioned for
   high utilization and stressful traffic conditions.  We have
   considered the case where the output buffer uses FIFO scheduling.
   However, based on our investigation of slow sources, we believe that
   fair queueing will not significantly affect the additional amount of
   buffering.  Others may wish to investigate this further.

6.0 Acknowledgement

   The authors thank Debasis Mitra for his penetrating questions during
   the internal talks and discussions.

7.0 References

   [1] P. Newman, T. Lyon and G. Minshall, ``Flow Labelled IP:
       Connectionless ATM Under IP,'' in Proceedings of INFOCOM'96,
       San Francisco, Apr. 1996.

   [2] Y. Rekhter, B. Davie, D. Katz, E. Rosen and G. Swallow, ``Cisco
       Systems' Tag Switching Architecture Overview,'' RFC 2105,
       Feb. 1997.

   [3] Y. Katsube, K. Nagami and H. Esaki, ``Toshiba's Router
       Architecture Extensions for ATM: Overview,'' RFC 2098,
       Feb. 1997.

   [4] A. Viswanathan, N. Feldman, R. Boivie and R. Woundy, ``ARIS:
       Aggregate Route-Based IP Switching,'' Internet Draft, Mar. 1997.

   [5] R. Callon, P. Doolan, N. Feldman, A. Fredette, G. Swallow and
       A. Viswanathan, ``A Framework for Multiprotocol Label
       Switching,'' Internet Draft, Nov. 1997.
   [6] WAN Packet Size Distribution,
       http://www.nlanr.net/NA/Learn/packetsizes.html.

   [7] J. Heinanen, ``Multiprotocol Encapsulation over ATM Adaptation
       Layer 5,'' RFC 1483, Jul. 1993.

   [8] P. Jacobs and P. Lewis, ``Discrete Time Series Generated by
       Mixtures III: Autoregressive Processes (DAR(p)),'' Technical
       Report NPS55-78-022, Naval Postgraduate School, 1978.

   [9] B.K. Ryu and A. Elwalid, ``The Importance of Long-Range
       Dependence of VBR Video Traffic in ATM Traffic Engineering,''
       ACM SIGCOMM'96, Stanford, CA, pp. 3-14, Aug. 1996.

Author Information:

   Indra Widjaja
   Fujitsu Network Communications
   4403 Bland Road
   Raleigh, NC 27609, USA
   Phone: 919 790-2037
   Email: indra.widjaja@fnc.fujitsu.com

   Anwar Elwalid
   Bell Labs, Lucent Technologies
   Murray Hill, NJ 07974, USA
   Phone: 908 582-7589
   Email: anwar@lucent.com