MPLS Working Group                                         Indra Widjaja
                                          Fujitsu Network Communications
Internet Draft                                             Anwar Elwalid
Expiration: Dec 1997                      Bell Labs, Lucent Technologies
                                                               July 1997

         Performance Issues in VC-Merge Capable MPLS Switches

Status of this Memo

This document is an Internet Draft. Internet Drafts are working
documents of the Internet Engineering Task Force (IETF), its Areas, and
its Working Groups. Note that other groups may also distribute working
documents as Internet Drafts.

Internet Drafts are draft documents valid for a maximum of six months.
Internet Drafts may be updated, replaced, or obsoleted by other
documents at any time. It is not appropriate to use Internet Drafts as
reference material or to cite them other than as a "working draft" or
"work in progress."
Please check the 1id-abstracts.txt listing contained in the
internet-drafts Shadow Directories on nic.ddn.mil, nnsc.nsf.net,
nic.nordu.net, ftp.nisc.sri.com, or munnari.oz.au to learn the current
status of any Internet Draft.

Abstract

VC merging allows many routes to be mapped to the same VC label,
thereby providing a scalable mapping method that can support tens of
thousands of edge routers. VC merging requires reassembly buffers so
that cells belonging to different packets intended for the same
destination do not interleave with each other. This document
investigates the impact of VC merging on the additional buffer required
for the reassembly buffers and other buffers. The main result indicates
that VC merging incurs a minimal overhead compared to non-VC merging in
terms of additional buffering. Moreover, the overhead decreases as
utilization increases, or as the traffic becomes more bursty.

1. Introduction

Recently, several organizations have presented radical proposals to
overhaul legacy router architectures, notably Ipsilon's IP switching
[1], Cisco's Tag switching [2], Toshiba's CSR [3], IBM's ARIS [4], and
the IETF's MPLS [5]. Although the details of their implementations
vary, one fundamental concept is shared by all these proposals: map the
route information to short fixed-length labels so that next-hop routers
can be determined quickly through indexing rather than some type of
searching (such as longest-prefix matching).

Although any layer 2 switching mechanism can in principle be applied,
the use of ATM switches in the backbone network is believed to be the
most attractive solution, since ATM hardware switches have been
extensively studied and are widely available in many different
architectures. In this document, we will assume that layer 2 switching
uses ATM technology.
In this case, each IP packet may be segmented into multiple 53-byte
cells before being switched. Traditionally, AAL 5 has been used as the
encapsulation method in data communications since it is simple,
efficient, and has a powerful error detection mechanism. For the ATM
switch to forward incoming cells to the correct outputs, the IP route
information needs to be mapped to ATM labels, which are kept in the VPI
and/or VCI fields. The relevant route information that is stored
semi-permanently in the IP routing table contains the tuple
(destination, next-hop router). The route information changes when the
network state changes; this typically occurs slowly, except during
transient cases. The word "destination" typically refers to the
destination network (or CIDR prefix), but can be readily generalized to
(destination network, QoS), (destination host, QoS), or many other
granularities. In this document, the destination can mean any of the
above or other possible granularities.

Several methods of mapping the route information to ATM labels exist.
In the simplest form, each source-destination pair is mapped to a
unique VC value at a switch. This method, called the non-VC merging
case, allows the receiver to easily reassemble cells into their
respective packets, since the VC values can be used to distinguish the
senders. However, if there are n sources and destinations, each switch
is potentially required to manage O(n^2) VC labels for full-meshed
connectivity. For example, if there are 1,000 sources/destinations,
then the size of the VC routing table is on the order of 1,000,000
entries. Clearly, this method is not scalable to large networks. In
the second method, called VP merging, the VP labels of cells that are
intended for the same destination are translated to the same outgoing
VP value, thereby reducing VP consumption downstream.
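The label-scaling argument above can be checked numerically. The
following fragment is our sketch, not part of the draft: it tabulates
per-switch label counts for the three mapping methods, with the router
count n and port count e as illustrative inputs.

```python
# Sketch (ours, not the draft's): label-table growth for the three
# mapping methods. n = number of sources/destinations, e = switch ports.
def labels_non_vc_merge(n: int) -> int:
    # One VC per source-destination pair: O(n^2).
    return n * n

def labels_vp_merge(n: int, e: int) -> int:
    # O(e) incoming VP labels per destination: O(e * n).
    return e * n

def labels_vc_merge(n: int) -> int:
    # Roughly one merged outgoing VC per destination: O(n).
    return n

n, e = 1000, 16
print(labels_non_vc_merge(n))  # 1000000 -- the draft's example
print(labels_vp_merge(n, e))   # 16000
print(labels_vc_merge(n))      # 1000
```

With n = 1,000 this reproduces the order-of-a-million table of the
non-VC merging case against the far smaller merged tables.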
For each VP, the VC value is used to identify the sender so that the
receiver can reconstruct packets even though cells from different
packets are allowed to interleave. For a given destination, the switch
would encounter O(e) incoming VP labels, where e is the number of
switch ports (typically, 8 to 16), which may depend on the network size
(or n). If there are n destinations, each switch is now required to
manage O(e*n) VP labels - a considerable saving from O(n^2). Although
the number of label entries is considerably reduced, VP merging is not
practical, since the VP space is limited to only 4,096 entries at the
network-to-network interface. A third method, called VC merging, maps
incoming VC labels for the same destination to the same outgoing VC
label. This method is scalable and does not have the space constraint
problem of VP merging. With VC merging, cells for the same destination
are indistinguishable at the output of a switch. Therefore, cells
belonging to different packets for the same destination cannot
interleave with each other, or else the receiver will not be able to
reassemble the packets. With VC merging, the boundary between two
adjacent packets is identified by the "End-of-Packet" (EOP) marker used
by AAL 5.

It is worth mentioning that cell interleaving may be allowed if we use
the AAL 3/4 Message Identifier (MID) field to identify the sender
uniquely. However, this method has some serious drawbacks: 1) the MID
size may not be sufficient to identify all senders, 2) the
encapsulation method is not efficient, 3) the CRC capability is not as
powerful as in AAL 5, and 4) AAL 3/4 is not as widely supported as AAL
5 in data communications.

Before VC merging with no cell interleaving can be qualified as the
most promising approach, two main issues need to be addressed.
First, the feasibility of an ATM switch that is capable of merging VCs
needs to be investigated. Second, there is widespread concern that the
additional amount of buffering required to implement VC merging is
excessive, making the VC-merging method impractical. Through analysis
and simulation, we will dispel these concerns in this document by
showing that the additional buffer requirement for VC merging is
minimal for most practical purposes. Other performance-related issues,
such as additional delay due to VC merging, will also be discussed.

2. A VC-Merge Capable MPLS Switch Architecture

In principle, the reassembly buffers can be placed at the input or
output side of a switch. If they are located at the input, then the
switch fabric has to transfer all cells belonging to a given packet in
an atomic manner, since cells are not allowed to interleave. This
requires the fabric to perform frame switching, which is neither
flexible nor desirable when multiple QoSs need to be supported. On the
other hand, if the reassembly buffers are located at the output, the
switch fabric can forward each cell independently as in normal ATM
switching. Placing the reassembly buffers at the output makes an
output-buffered ATM switch a natural choice.

We consider a generic output-buffered VC-merge capable MPLS switch with
VCI translation performed at the output. Other possible architectures
may also be adopted. The switch consists of a non-blocking cell switch
fabric and multiple output modules (OMs), each associated with an
output port. Each arriving ATM cell is appended with two fields
containing an output port number and an input port number. Based on
the output port number, the switch fabric forwards each cell to the
correct output port, just as in normal ATM switches. If VC merging is
not implemented, then the OM consists of an output buffer.
If VC merging is implemented, the OM contains a number of reassembly
buffers (RBs), followed by a merging unit and an output buffer. Each
RB typically corresponds to an incoming VC value. It is important to
note that each buffer is a logical buffer, and it is envisioned that a
common pool of memory is shared by the reassembly buffers and the
output buffer.

The purpose of the RB is to ensure that cells for a given packet do not
interleave with other cells that are merged to the same VC. This
mechanism (called store-and-forward at the packet level) can be
accomplished by storing each incoming cell for a given packet in the RB
until the last cell of the packet arrives. When the last cell arrives,
all cells in the packet are transferred in an atomic manner to the
output buffer for transmission to the next hop. It is worth pointing
out that performing a cut-through mode at the RB is not recommended,
since it would result in wasted bandwidth if the subsequent cells are
delayed. During the transfer of a packet to the output buffer, the
incoming VCI is translated to the outgoing VCI by the merging unit. To
save VC translation table space, different incoming VCIs are merged to
the same outgoing VCI during the translation process if the cells are
intended for the same destination. If all traffic is best-effort, full
merging, where all incoming VCs destined for the same destination
network are mapped to the same outgoing VC, can be implemented.
However, if the traffic is composed of multiple classes, it is
desirable to implement partial merging, where incoming VCs destined for
the same (destination network, QoS) are mapped to the same outgoing VC.

Regardless of whether full merging or partial merging is implemented,
the output buffer may consist of a single FIFO buffer or multiple
buffers, each corresponding to a destination network or (destination
network, QoS).
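The store-and-forward-at-the-packet-level rule described above can be
sketched as follows. This is a minimal illustration, not the draft's
implementation; the VCI values and merge table are invented for the
example.

```python
from collections import defaultdict, deque

class OutputModule:
    """Sketch of a VC-merge output module: one logical reassembly
    buffer (RB) per incoming VCI, a merging unit, and a FIFO output
    buffer (illustrative only)."""

    def __init__(self, merge_table):
        self.merge_table = merge_table        # incoming VCI -> merged outgoing VCI
        self.reassembly = defaultdict(list)   # one logical RB per incoming VCI
        self.output_buffer = deque()          # cells ready for the next hop

    def receive(self, vci, payload, eop):
        # Store each cell in its RB; on the EOP cell, move the whole
        # packet atomically to the output buffer, translating the VCI.
        rb = self.reassembly[vci]
        rb.append(payload)
        if eop:
            out_vci = self.merge_table[vci]
            for cell in rb:
                self.output_buffer.append((out_vci, cell))
            rb.clear()

# Two incoming VCs merged onto outgoing VC 7 (values are illustrative).
om = OutputModule({101: 7, 102: 7})
om.receive(101, "a0", eop=False)
om.receive(102, "b0", eop=True)   # the packet on VC 102 completes first
om.receive(101, "a1", eop=True)
print(list(om.output_buffer))     # [(7, 'b0'), (7, 'a0'), (7, 'a1')]
```

Even though the two packets arrived interleaved on different incoming
VCs, their cells never interleave on the merged outgoing VC.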
If a single output buffer is used, then the switch essentially tries to
emulate frame switching. If multiple output buffers are used, VC
merging is different from frame switching, since cells of a given
packet are not required to be transmitted back-to-back. In fact, fair
queueing can be implemented so that cells from their respective output
buffers are served according to some QoS requirements. Note that
cell-by-cell scheduling can be implemented with VC merging, whereas
only packet-by-packet scheduling can be implemented with frame
switching. In summary, VC merging is more flexible than frame
switching and supports better QoS control.

3. Performance Investigation of VC Merging

This section compares the VC-merging switch and the non-VC merging
switch. The non-VC merging switch is analogous to the traditional
output-buffered ATM switch, whereby cells of any packets are allowed to
interleave. Since each cell is a distinct unit of information, the
non-VC merging switch is a work-conserving system at the cell level.
On the other hand, the VC-merging switch is non-work-conserving, so its
performance is always lower than that of the non-VC merging switch.
The main objective here is to study the performance implications of VC
merging for MPLS switches, such as additional delay and additional
buffering, subject to different traffic conditions.

In the simulation, the arrival process to each reassembly buffer is an
independent ON-OFF process. Cells within an ON period form a single
packet. During an OFF period, the slots are idle.

3.1 Effect of Utilization on Additional Buffer Requirement

We first investigate the effect of switch utilization on the additional
buffer requirement for a given overflow probability.
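The ON-OFF arrival model just described can be sketched as follows.
This is our reconstruction under the stated assumptions (geometric ON
and OFF lengths), not the authors' simulator; the parameter names are
ours.

```python
import math
import random

# Sketch (ours): an ON-OFF cell source. Each ON period carries the
# cells of one packet, each OFF period is idle, with geometric lengths.
def geometric(mean: float, rng: random.Random) -> int:
    # Geometric on {1, 2, ...} with the given mean (mean = 1/p).
    p = 1.0 / mean
    return 1 + int(math.log(1.0 - rng.random()) / math.log(1.0 - p))

def on_off_trace(mean_pkt: float, mean_gap: float, n_packets: int,
                 seed: int = 1) -> list:
    rng = random.Random(seed)
    slots = []
    for _ in range(n_packets):
        slots += [1] * geometric(mean_pkt, rng)   # ON: one packet's cells
        slots += [0] * geometric(mean_gap, rng)   # OFF: idle slots
    return slots

trace = on_off_trace(mean_pkt=10, mean_gap=10, n_packets=2000)
print(sum(trace) / len(trace))   # utilization, near 10/(10+10) = 0.5
```

Feeding one such independent trace into each reassembly buffer
reproduces the traffic setup used throughout this section.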
To carry out the comparison, we analyze the VC-merging and non-VC
merging cases when the average packet size is equal to 10 cells, using
geometrically distributed packet sizes and packet interarrival times,
with the cells of a packet arriving contiguously (later, we consider
other distributions). The results show that, as expected, the
VC-merging switch requires more buffering than the non-VC merging
switch. When the utilization is low, there may be relatively many
incomplete packets in the reassembly buffers at any given time, thus
wasting storage resources. For example, when the utilization is 0.3,
VC merging requires additional storage of about 45 cells to achieve the
same overflow probability. However, as the utilization increases to
0.9, the additional storage needed to achieve the same overflow
probability drops to about 30 cells. The reason is that when traffic
intensity increases, the VC-merging system becomes more
work-conserving.

It is important to note that ATM switches must be dimensioned at high
utilization values (in the range of 0.8-0.9) to withstand harsh traffic
conditions. At a utilization of 0.9, a VC-merge ATM switch requires a
buffer of size 976 cells to provide an overflow probability of 10^{-5},
whereas a non-VC merge ATM switch requires a buffer of size 946 cells.
These numbers translate to an additional buffer requirement for VC
merging of about 3% - hardly a significant additional hardware cost.

3.2 Effect of Packet Size on Additional Buffer Requirement

We now vary the average packet size to see the impact on the buffer
requirement. We fix the utilization at 0.5 and use two different
average packet sizes: B=10 and B=30. To achieve the same overflow
probability, VC merging requires an additional buffer of about 40 cells
(or 4 packets) compared to non-VC merging when B=10. When B=30, the
additional buffer requirement is about 90 cells (or 3 packets).
In terms of the number of packets, the additional buffer requirement
does not increase as the average packet size increases.

3.3 Additional Buffer Overhead Due to Packet Reassembly

There may be some concern that VC merging may require too much
buffering when the number of reassembly buffers increases, which would
happen if the switch size is increased or if cells for packets going to
different destinations are allowed to interleave. We will show that
this concern is unfounded, since buffer sharing becomes more efficient
as the number of reassembly buffers increases.

To demonstrate our argument, we consider the overflow probability for
VC merging for several numbers of reassembly buffers (N); i.e., N=4, 8,
16, 32, 64, and 128. The utilization is fixed at 0.8 for each case,
and the average packet size is chosen to be 10. For a given overflow
probability, the increase in buffer requirement becomes less pronounced
as N increases. Beyond a certain value (N=32), the increase in buffer
requirement becomes insignificant. The reason is that as N increases,
the traffic gets thinned and eventually approaches a limiting process.

3.4 Effect of Interarrival Time Distribution on Additional Buffer

We now turn our attention to different traffic processes. First, we
use the same ON period distribution and change the OFF period
distribution from geometric to hypergeometric, which has a larger
Squared Coefficient of Variation (SCV), defined to be the ratio of the
variance to the square of the mean. Here we fix the utilization at
0.5. As expected, the switch performance degrades as the SCV increases
in both the VC-merging and non-VC merging cases. To achieve a buffer
overflow probability of 10^{-4}, the additional buffer required is
about 40 cells when SCV=1, 26 cells when SCV=1.5, and 24 cells when
SCV=2.6.
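The SCV defined above is straightforward to compute from a sample of
interarrival times; a minimal sketch (ours, for illustration):

```python
# Sketch: the squared coefficient of variation (SCV) as defined above,
# i.e. the variance divided by the square of the mean.
def scv(samples) -> float:
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return var / (mean * mean)

print(scv([5, 5, 5, 5]))   # 0.0  - deterministic spacing
print(scv([1, 3]))         # 0.25 - mild variability
```

A deterministic interarrival sequence has SCV 0, and more variable
(burstier) spacings push the SCV above 1, which is the regime compared
in this subsection.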
This result shows that VC merging becomes more work-conserving as the
SCV increases. In summary, as the interarrival time between packets
becomes more bursty, the additional buffer requirement for VC merging
diminishes.

3.5 Effect of Internet Packets on Additional Buffer Requirement

Up to now, the packet size has been modeled as a geometric distribution
with a certain parameter. We now modify the packet size distribution
to a more realistic one. Since the initial deployment of VC-merge
capable ATM switches is likely to be in the core network, it is more
realistic to consider the packet size distribution in the Wide Area
Network. To this end, we refer to the data given in [6]. The data,
collected on Feb 10, 1996, in the FIX-West network, is in the form of a
probability mass function versus packet size in bytes. Data collected
on other dates closely resembles this data set.

The distribution appears bi-modal, with two big masses at 40 bytes
(about a third) due to TCP acknowledgment packets, and at 552 bytes
(about 22 percent) due to Maximum Transmission Unit (MTU) limitations
in many routers. Other prominent packet sizes include 72 bytes (about
4.1 percent), 576 bytes (about 3.6 percent), 44 bytes (about 3
percent), 185 bytes (about 2.7 percent), and 1500 bytes (about 1.5
percent) due to the Ethernet MTU. The mean packet size is 257 bytes,
and the variance is 84,287 bytes^2. Thus, the SCV for the Internet
packet size is about 1.1.

To convert the IP packet size in bytes to ATM cells, we assume AAL 5
using null encapsulation, where the additional overhead in AAL 5 is 8
bytes long [7]. Using the null encapsulation technique, the average
packet size is about 6.2 ATM cells.

We examine the buffer overflow probability against the buffer size
using the Internet packet size distribution. The OFF period is assumed
to have a geometric distribution.
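The byte-to-cell conversion above can be checked directly; a minimal
sketch (ours) of the AAL 5 null-encapsulation arithmetic, assuming the
8-byte trailer and 48-byte cell payloads stated above:

```python
import math

# Sketch: AAL 5 null encapsulation adds an 8-byte trailer, and the
# result is padded out to an integral number of 48-byte cell payloads.
def cells_for_packet(ip_bytes: int) -> int:
    return math.ceil((ip_bytes + 8) / 48)

print(cells_for_packet(40))    # 1  (TCP acknowledgment)
print(cells_for_packet(552))   # 12 (MTU-limited packets)
print(cells_for_packet(1500))  # 32 (Ethernet MTU)
```

Applying this to the FIX-West distribution gives the roughly 6-cell
average packet size quoted above.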
Again, we find the same behavior as before, except that the buffer
requirement drops with Internet packets due to the smaller average
packet size.

3.6 Effect of Correlated Interarrival Times on Additional Buffer

To model correlated interarrival times, we use the DAR(p) process
(discrete autoregressive process of order p) [8], which has been used
to accurately model video traffic (the Star Wars movie) in [9]. The
DAR(p) process is a p-th order (lag-p) discrete-time Markov chain. The
state of the process at time n depends explicitly on the states at
times (n-1), ..., (n-p).

We examine the overflow probability for the case where the interarrival
time between packets is geometric and independent, and the case where
the interarrival time is geometric and correlated to the previous one
with a coefficient of correlation equal to 0.9. The empirical
distribution of the Internet packet size from the last section is used.
The utilization is fixed at 0.5 in each case. Although the overflow
probability increases as p increases, the additional amount of
buffering actually decreases for VC merging as p, or equivalently the
correlation, increases. One can easily conclude that higher-order
correlation or long-range dependence will result in similar qualitative
performance.

3.7 Slow Sources

The discussion up to now has assumed that cells within a packet arrive
back-to-back. With slow sources, adjacent cells would typically be
spaced by idle slots. Adjacent cells within the same packet may also
be perturbed and spaced as these cells travel downstream, due to the
merging and splitting of cells at preceding nodes.

Here, we assume that each source transmits at a rate of r (0 < r <= 1),
in units of the link speed, to the ATM switch.
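The stretching-and-thinning construction used in this subsection (the
ON period stretched by 1/r, with each stretched slot carrying a cell
with probability r) can be sketched as follows. This is our
reconstruction; in particular, rounding the stretched ON length to the
nearest integer is our assumption.

```python
import random

# Sketch (ours): a slow source's ON period. The ON period of an
# n_cells-cell packet is stretched by 1/r, and each slot in the
# stretched period carries a cell with probability r, so the mean
# number of cells is preserved as r varies.
def stretched_on_period(n_cells: int, r: float, rng: random.Random):
    length = round(n_cells / r)
    return [1 if rng.random() < r else 0 for _ in range(length)]

rng = random.Random(0)
fast = stretched_on_period(10, 1.0, rng)   # r=1: back-to-back cells
slow = stretched_on_period(10, 0.2, rng)   # r=0.2: cells spaced by idles
print(sum(fast), len(fast))   # 10 10
print(len(slow))              # 50 slots, about 10 of them carrying cells
```

With r=1 this degenerates to the back-to-back case used earlier, while
r=0.2 spreads the same average packet over five times as many slots.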
To capture the merging and splitting of cells as they travel in the
network, we will also assume that the cell interarrival time within a
packet is randomly perturbed. To model this perturbation, we stretch
the original ON period by 1/r and flip a Bernoulli coin with parameter
r during the stretched ON period. In other words, a slot contains a
cell with probability r, and is idle with probability 1-r during the ON
period. By doing so, the average packet size remains the same as r is
varied. We simulated slow sources on the VC-merge ATM switch using the
Internet packet size distribution with r=1 and r=0.2. The packet
interarrival time is assumed to be geometrically distributed. Reducing
the source rate in general reduces the stress on the ATM switches,
since the traffic becomes smoother. With VC merging, slow sources also
have the effect of increasing the reassembly time. At a utilization of
0.5, the reassembly time is more dominant and causes the slow source
(with r=0.2) to require more buffering than the fast source (with r=1).
At a utilization of 0.8, the smoother traffic is more dominant and
causes the slow source (with r=0.2) to require less buffering than the
fast source (with r=1). This result again has practical consequences
for ATM switch design, where buffer dimensioning is performed at
reasonably high utilization. In this situation, slow sources only
help.

3.8 Packet Delay

It is of interest to see the impact of cell reassembly on packet delay.
Here we consider the delay at one node only; end-to-end delays are the
subject of ongoing work. We define the delay of a packet as the time
between the arrival of the first cell of the packet at the switch and
the departure of the last cell of the same packet.
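Delays in this subsection are measured in cell times; for scale, a cell
count can be converted to wall-clock time. The 155.52 Mb/s figure below
is the nominal OC-3 line rate and is our assumption for the arithmetic
(SONET overhead makes the usable payload rate slightly lower, hence
somewhat longer per-cell times).

```python
# Sketch (ours): converting a delay in cell times to microseconds at
# the nominal OC-3 line rate.
CELL_BITS = 53 * 8       # one ATM cell, header included
OC3_BPS = 155.52e6       # nominal OC-3 line rate (assumption)

def cells_to_microseconds(n_cells: int) -> float:
    return n_cells * CELL_BITS / OC3_BPS * 1e6

print(round(cells_to_microseconds(20), 1))   # 54.5
```

Twenty cell times come to a few tens of microseconds, consistent with
the figure quoted later in this subsection.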
We study the average packet delay as a function of utilization for both
VC-merging and non-VC merging switches for the case r=1 (back-to-back
cells in a packet). Again, the Internet packet size distribution is
used to reflect the more realistic scenario. The interarrival time of
packets is geometrically distributed. Although the difference in the
worst-case delay between VC merging and non-VC merging can in theory be
very large, we observe that the difference in the average delays of the
two systems is consistently about one average packet time for a wide
range of utilizations. The difference is due to the average time
needed to reassemble a packet.

To see the effect of cell spacing in a packet, we again simulate the
average packet delay for r=0.2. We observe that the difference in the
average delays of VC merging and non-VC merging increases to a few
packet times (approximately 20 cells at high utilization). It should
be noted that when a VC-merge capable ATM switch reassembles packets,
it in effect performs the task that the receiver would otherwise have
to do. From a practical point of view, an increase of 20 cells
translates to about 60 microseconds at OC-3 link speed. This
additional delay should be insignificant for most applications. For
delay-sensitive traffic, the additional delay can be reduced by using
smaller packets.

4. Security Considerations

Security considerations are not addressed in this document.

5. Conclusion

This document has investigated the impact of VC merging on ATM switch
performance. We experimented with various traffic processes to
understand the detailed behavior of VC-merge capable MPLS switches.
Our main finding indicates that VC merging incurs a minimal overhead
compared to non-VC merging in terms of additional buffering. Moreover,
the overhead decreases as utilization increases, or as the traffic
becomes more bursty.
This fact has important practical consequences, since switches are
dimensioned for high utilization and stressful traffic conditions. We
have considered the case where the output buffer uses FIFO scheduling.
Future work will focus on fair queueing and variations. Fair queueing
essentially has the effect of spacing the incoming cells and increasing
the number of reassembly buffers. Our earlier results indicate that
these two factors do not have a significant impact on the amount of
buffering. However, the additional delay due to fair queueing requires
further investigation. Network-wide performance implications resulting
from interconnecting many VC-merge capable ATM switches also need
further study.

6. Acknowledgment

The authors thank Debasis Mitra for his penetrating questions during
the internal talks and discussions.

7. References

[1] P. Newman, T. Lyon and G. Minshall, "Flow Labelled IP:
    Connectionless ATM Under IP," in Proceedings of INFOCOM'96,
    San Francisco, Apr. 1996.

[2] Y. Rekhter, B. Davie, D. Katz, E. Rosen and G. Swallow, "Cisco
    Systems' Tag Switching Architecture Overview," RFC 2105, Feb. 1997.

[3] Y. Katsube, K. Nagami and H. Esaki, "Toshiba's Router Architecture
    Extensions for ATM: Overview," RFC 2098, Feb. 1997.

[4] A. Viswanathan, N. Feldman, R. Boivie and R. Woundy, "ARIS:
    Aggregate Route-Based IP Switching," Internet Draft, Mar. 1997.

[5] R. Callon, P. Doolan, N. Feldman, A. Fredette, G. Swallow and
    A. Viswanathan, "A Framework for Multiprotocol Label Switching,"
    Internet Draft, May 1997.

[6] WAN Packet Size Distribution,
    http://www.nlanr.net/NA/Learn/packetsizes.html.

[7] J. Heinanen, "Multiprotocol Encapsulation over ATM Adaptation
    Layer 5," RFC 1483, Jul. 1993.

[8] P. Jacobs and P.
Lewis, "Discrete Time Series Generated by Mixtures III:
    Autoregressive Processes (DAR(p))," Technical Report NPS55-78-022,
    Naval Postgraduate School, 1978.

[9] B.K. Ryu and A. Elwalid, "The Importance of Long-Range Dependence
    of VBR Video Traffic in ATM Traffic Engineering," ACM SIGCOMM'96,
    Stanford, CA, pp. 3-14, Aug. 1996.

Authors' Addresses

Indra Widjaja
Fujitsu Network Communications, Inc.
4403 Bland Road
Raleigh, NC 27609, USA
Phone: 919 790 2037
Email: i_widjaja@fujitsu-fnc.com

Anwar Elwalid
Bell Labs, Lucent Technologies
600 Mountain Ave. Rm 2C-124
Murray Hill, NJ 07974, USA
Phone: 908 582-7589
Email: anwar@lucent.com