Internet Engineering Task Force                                J. Ibanez
INTERNET DRAFT                                                K. Nichols
Expires February, 1999                                      Bay Networks
                                                            August, 1998

        Preliminary Simulation Evaluation of an Assured Service

Status of this Memo

   This document is an Internet-Draft. Internet-Drafts are working
   documents of the Internet Engineering Task Force (IETF), its areas,
   and its working groups. Note that other groups may also distribute
   working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."
   To view the entire list of current Internet-Drafts, please check
   the "1id-abstracts.txt" listing contained in the Internet-Drafts
   Shadow Directories on ftp.is.co.za (Africa), ftp.nordu.net (Europe),
   munnari.oz.au (Pacific Rim), ds.internic.net (US East Coast), or
   ftp.isi.edu (US West Coast).

   Distribution of this document is unlimited.

   This Internet Draft expires in February 1999.

Abstract

   This draft presents a simulation analysis of Assured Service, an
   end-to-end service based on the differentiated services enhancements
   for IP. Assured Service has been the subject of much discussion in
   the past year in the IETF, but solid information on its
   applicability to the entire Internet has been lacking. This report
   is aimed at providing a first step in this direction. Assured
   Service requires an active queue management algorithm with
   preferential packet drop. The RIO algorithm (an extension of the RED
   algorithm) has been the primary method suggested and is the one
   evaluated here. Our results show that Assured Service does not
   provide clearly defined and consistent rate guarantees; the
   advantage gained by connections using Assured Service is not a
   quantifiable one. Further work would be required to determine an
   appropriate use of the Assured Service. A pdf version of this
   document is available and recommended for the figures it contains.

1. Introduction

   The Assured Service (AS) as first defined in [SVCALLOC] is an
   example of an end-to-end service that can be built from the proposed
   differentiated services enhancements to IP [HEADER] using a single
   PHB. This type of service is appealing in its apparent ease of
   deployment, but insufficient measurement and analysis has been done
   to determine its range of applicability and whether it scales to the
   entire Internet. This draft is a first step toward an answer.
   This report analyzes Assured Service through simulations performed
   with ns-2 [ns], a network simulator developed by UC Berkeley, LBL,
   USC/ISI, and Xerox PARC. A drop preference queue management
   mechanism is the required PHB to deploy AS. Though other queue
   management techniques based on drop preference may be used, this
   document explores only the use of RIO, an extension of RED and the
   PHB originally proposed to implement Assured Service [SVCALLOC,
   2BIT].

   The starting point for this work was Clark and Fang's "Explicit
   Allocation of Best Effort Delivery Service" [EXPALLOC], which
   presents interesting results but, in our opinion, does not use
   appropriate traffic models and scenarios. This report goes a step
   further toward realistic Internet traffic patterns by mixing Assured
   traffic with best-effort traffic. In this work, we have also
   restricted ourselves to long-lived TCP connections, but we intend
   further work with traffic mixes that reflect the mix of short-lived
   TCPs seen in the Internet today. We conclude that Assured Service
   cannot provide clearly defined and consistent guarantees to these
   connections. Our results show two main problems. First is the
   interaction of TCP connection dynamics with the differing treatment
   of INs and OUTs. Second, the instantaneous or burst behavior in the
   RIO-managed queues and at points in the network where traffic merges
   can cause quite different behavior than the average or expected
   behavior. We expect this latter effect to cause more problems when
   we look at more realistic, bursty traffic.

2. Differentiated Services, Network Model and Assured Service

2.1 Differentiated services

   Differentiated services is described in [HEADER] and [ARCH], and an
   earlier version with discussion of AS in [2BIT]. In this draft, we
   have only introduced Assured Service into a network model.
2.2 The network model

   Simulations are discussed in more detail in the Appendix. Unless
   stated otherwise, simulations use the topology of figure 1. 50
   point-to-point connections share a bottleneck link of 50 Mbps (6.25
   Mbytes/s). 10 Mbps links connect each host to its respective router.
   Host hi is connected to host hi+50 and connections are identified by
   the sending host ID; for instance, connection 0 is associated with
   host h0. The connections are "infinite" FTPs. Transfers are
   unidirectional, and ACKs are never lost, which implies that the
   performance of the communications will be better than in a real
   situation where ACKs can be lost in addition to data. A profiler
   with a target rate of 1 Mbps (125 Kbytes/s) is attached to each AS
   capable host; an aggregate policer is installed in the router just
   before the bottleneck link. RTTs are chosen randomly in the range
   50..150ms for each connection, and starting times are randomly
   distributed within the first second of simulation time. Packets are
   576 bytes.

           10Mb,                                      10Mb,
      (All 50 links)                             (All 50 links)
   S     h0_________                              ______ h50    R
   e     h1_________\                            /______ h51    e
   n      .          \\        50 Mbps          //        .     c
   d      .          r0 ------------------------ r1       .     e
   e     h48_________//                         \ \_____ h98    i
   r     h49_________/                           \______ h99    v
   s                                                            e
                                                                r
               DATA -->                <--- ACKs                s

          Figure 1: 50 point-to-point connections topology

2.3 Assured Service

   Assured Service was first introduced by Clark and Wroclawski in
   [SVCALLOC], and is also considered in [2BIT]. The idea behind AS is
   to give the customer the assurance of a minimum throughput, even
   during periods of congestion, while allowing him to consume more
   bandwidth when the network load is low.
   Thus a connection using the assured service should achieve a
   throughput equal to the subscribed minimum rate, also called target
   rate, plus some share of the remaining bandwidth gained by competing
   with all the active best-effort connections.

   The mechanism to achieve this goal is as follows. A profile
   describing the target rate and possibly a burst rate is contracted
   by the user. The edge device to which the host is connected, or the
   host itself if it is AS capable, measures the rate of the
   connection, and as long as this rate is below the target rate,
   packets are marked as IN-profile. When the target rate is exceeded,
   packets are marked as OUT-of-profile. If a router somewhere in the
   path experiences congestion, it will preferentially drop packets
   marked as OUT. One implementation of this is to assume that OUT
   packets are treated the same as packets which are not profiled and
   thus are simply not marked at all. In this case, we simply say that
   a packet is "marked" to mean it is "marked as IN".

   The goal of AS is thus to assure a minimum throughput to a
   connection while enabling it to get even better performance when the
   network load permits. However, there is some concern about the
   fairness of the assured service with respect to best-effort service.
   In severe congestion, where there is not enough bandwidth to satisfy
   all the contracts, assured connections would compete against each
   other for the available bandwidth, depriving best-effort connections
   of any bandwidth. This situation is expected to be very unusual.

2.1.1 The RIO algorithm

   The RIO algorithm allows two traffic classes within the same queue
   to be treated differently by applying a drop preference to one of
   the classes. RIO is an extension of RED [RED], "RED with In and
   Out", and is discussed in [SVCALLOC] and [EXPALLOC].
   RIO can be viewed as the combination of two RED algorithms with
   different drop probability curves, chosen to give one group of
   packets preference. We assume familiarity with RED at the level of
   [RED]. For OUT packets, as long as the average queue size is below
   minth_out, no packets are dropped. If the average queue size exceeds
   this, arriving packets are dropped with a probability that increases
   linearly from 0 to maxp_out. If the average queue size exceeds
   maxth_out, all OUT packets are dropped. Note that the average queue
   size is based on the total number of packets in the queue,
   regardless of their marking.

   For IN packets, the average queue size is based on the number of IN
   packets present in the queue, and the parameters are set differently
   in order to start dropping OUTs well before any INs are discarded.
   However, when there are only OUT (or best-effort) packets, RIO has
   to perform much like RED. Therefore we have to set OUT parameters
   following almost the same rules as for RED. We observed in
   simulation that IN and OUT parameters need not be very different;
   the inherent discrimination produced by the average queue size
   calculation is enough.

2.1.2 Choice of the RIO parameters

   We applied the general rules for RED [RED] and set a queue weight of
   0.002, a minth_out of about 0.4*maxq_size, a maxth_out of about
   0.8*maxq_size, and a maxp_out of 0.05. For IN parameters we took a
   different approach from that proposed in [EXPALLOC] or [SVCALLOC].
   Those papers suggest that the thresholds be chosen much lower for
   OUT packets than for INs, to achieve enough differentiation. We
   found that by calculating the drop probability for OUT packets based
   on the total average queue size, and for IN packets based only on
   the average number of IN packets in the queue, the discrimination is
   already significant.
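   The two-curve drop decision just described can be sketched as
   follows. This is our simplified illustration, not the ns-2
   implementation: it omits RED's EWMA averaging and inter-drop count
   correction, and the threshold values are the rounded ones used in
   our simulations.

```python
import random

# Sketch of the RIO drop decision (ours, simplified).  avg_total is the
# average total queue length (INs + OUTs); avg_in averages IN packets only.
def rio_drop(marked_in, avg_total, avg_in,
             out_params=(420, 840, 0.05),    # minth_out/maxth_out/maxp_out
             in_params=(500, 840, 0.02)):    # minth_in/maxth_in/maxp_in
    minth, maxth, maxp = in_params if marked_in else out_params
    avg = avg_in if marked_in else avg_total
    if avg < minth:
        return False                         # never drop below minth
    if avg >= maxth:
        return True                          # force drop above maxth
    # drop probability rises linearly from 0 at minth to maxp at maxth
    return random.random() < maxp * (avg - minth) / (maxth - minth)
```

   Because avg_in counts only IN packets, a queue holding mostly OUTs
   leaves INs untouched even when the total queue is long; this is the
   inherent discrimination noted above.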
   Furthermore, if we set IN parameters more leniently, that may cause
   trouble if most of the packets arriving are marked IN. Choosing the
   thresholds for INs close to those of OUTs maintains consistent
   behavior with any proportion of marked traffic. We set minth_in to
   about 0.45*maxq_size and maxth_in to the same value as maxth_out,
   0.8*maxq_size. For maxp_in we used 0.02.

   The notation used throughout this report to represent the RIO
   parameters is:

      minth_out/maxth_out/maxp_out  minth_in/maxth_in/maxp_in

   Following this notation and rounding the values, the simulation
   parameters we used are:

      420/840/0.05 for OUT parameters, and 500/840/0.02 for IN
      parameters

2.1.3 Policing mechanism: average rate estimator or token bucket?

   An interesting point when implementing the architecture is the
   choice of mechanism to check conformity to the contracted profile.
   This policing can be performed by the marker in the edge device or
   by the policer at a boundary between two domains.

   We experimented with two different mechanisms: an average rate
   estimator, as presented in [EXPALLOC], and a token bucket. When the
   average rate estimator measures a rate that exceeds the contracted
   target rate for a given flow, the policer marks OUTs with a linearly
   increasing probability. This was designed for TCP connections, but
   for other applications the average rate estimator may allow more IN
   packets to enter the network than what has been contracted. We used
   our simulation model to explore the relative performance of the two
   mechanisms. We simulated a mix of best-effort and AS connections out
   of a total of 50 connections. In our simulations, AS connections
   achieved better performance when using token buckets, and the
   discrimination between AS and best-effort connections is much
   better, as seen from the lack of overlap between the two classes.
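   The token-bucket profiler can be sketched like this (our simplified
   model; the class name and the choice of bytes and seconds as units
   are ours):

```python
# Simplified token-bucket profiler (ours): tokens accumulate at the target
# rate up to the bucket depth; a packet is marked IN only if enough tokens
# remain to cover it, otherwise it is marked OUT.
class TokenBucket:
    def __init__(self, rate, depth):
        self.rate = rate            # target rate in bytes/s (1 Mbps = 125000)
        self.depth = depth          # bucket depth in bytes (20000 = 20 Kbytes)
        self.tokens = depth         # bucket starts full
        self.last = 0.0
    def mark(self, now, pkt_size):
        # refill tokens for the elapsed time, capped at the bucket depth
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_size:
            self.tokens -= pkt_size
            return "IN"
        return "OUT"

tb = TokenBucket(rate=125000, depth=20000)
# A back-to-back burst of 576-byte packets: the first 34 (20000 // 576)
# are deterministically marked IN, the rest OUT.
marks = [tb.mark(0.0, 576) for _ in range(40)]
print(marks.count("IN"))            # -> 34
```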
   The superiority of the token bucket is due to its permitting
   transmission of a deterministic burst of IN packets, whereas an
   average rate estimator probabilistically marks some packets beyond
   the target rate as IN and some as OUT. A correctly configured token
   bucket will therefore allow for the natural burstiness of TCP by
   marking as IN all the packets within a burst, while with an average
   rate estimator some packets will be marked OUT, giving them drop
   preference.

   In addition, the probabilistic marking is problematic for metering a
   CBR source. A profile meter using an average rate estimator would
   allow the source to transmit at a sustained rate higher than the
   contracted one. Token buckets do not permit this. Thus we used token
   buckets in our simulations. We found that a token bucket depth of 20
   Kbytes at the profilers and 25 Kbytes at the policer keeps the
   remarking rate low (under 3%).

3. Results for Assured Service

3.1 Influence of the round-trip time

   TCP performance is well known to be sensitive to a connection's
   round-trip time (RTT). The larger the RTT, the more time needed to
   recover after a packet loss. It is therefore interesting to see if
   this effect is lessened by the use of AS.

   Simulations were run 5 times with 10, 25 and 40 AS connections out
   of 50, the others being standard best-effort connections. Every AS
   capable host has a target rate of 1 Mbps (125 Kbytes/s); therefore
   the portion of the bottleneck link bandwidth allocated to assured
   traffic amounts to 20% in the case of 10 AS connections, 50% with 25
   of them, and 80% with 40 of them. Results are shown in figures 2, 3,
   and 4. Each point represents the measured rate for one connection,
   averaged over 5 runs. The trend lines correspond to an exponential
   regression and are included to give a rough idea of the way achieved
   rate varies with the RTT.
   The dependency of achieved rate on the RTT is also noticeable for AS
   connections. However, by comparing the spread of the measurements in
   figure 2 and figure 4, we notice that when the AS connections are
   numerous, they show less deviation (with respect to RTT) than the
   best-effort connections do in the reverse situation. There is a more
   critical observation: notice that some connections do not achieve
   their target rate, while others exceed the target rate. (With
   average rate estimators, we had some best-effort connections getting
   more bandwidth than some AS ones.) (Figures 2-4 in pdf version.)

   Figure 5 shows the difference between the case where there are only
   best-effort connections and the one where there are only AS
   connections, the two scenarios being plotted on the same graph.
   (Figure 5 in pdf version)

   In the all-AS case there is less RTT unfairness than in the
   all-best-effort case. To understand this phenomenon we need to look
   at the RIO queue size, which is plotted in figure 6. What happens is
   that since a connection with a small RTT gets more bandwidth by
   opportunistically exceeding the target rate and sending OUT packets,
   many of those packets will be dropped, causing the connection to
   decrease its sending rate. The more OUT packets a source sends, the
   higher the probability that one or more of its packets will be
   dropped within a certain time interval. That has the effect of
   mitigating the gain of connections with a smaller RTT. Figure 6
   demonstrates how in the best-effort-only case the average queue size
   oscillates substantially, leaving room for small RTT connections to
   opportunistically send more packets. (Figure 6 in pdf version)

   Note, however, that this example is artificial since all the
   available bandwidth is allocated to assured service, and
   consequently only a few connections reach their target rate.
   Having said that, the goal of this example was merely to illustrate
   this point by way of an extreme situation. If we take a close look
   at figure 2 and compare it with figure 4, we observe that having a
   significant amount of assured traffic lowers the dependency on the
   RTT not only for AS connections, but also for best-effort
   connections. The cause is the same as above: connections with a
   small RTT send more packets than those with a large RTT, thus having
   more chances to undergo a packet drop in a certain time interval.

3.2 Performance against target rates

   In this section we explore how well AS TCP connections achieve their
   target rates in a consistent manner. We simulate 40 connections, 20
   of which are AS and the remaining 20 are best-effort. An RTT of
   100ms was used for all the connections. The bottleneck link
   bandwidth is 20 Mbps (2.5 Mbytes/s) and we use RIO parameters
   347/694/0.05 390/694/0.02. The distribution of target rates among AS
   capable hosts is as follows:

      Host ID     Target rate
      0-3         2.0 Mbps
      4-7         1.0 Mbps
      8-11        0.5 Mbps
      12-15       0.2 Mbps
      16-19       0.1 Mbps

   76% of the total bandwidth is allocated to the AS connections. Other
   scenarios with different numbers of connections, with and without
   best-effort connections, were tried and yielded comparable results.
   Simulation results were averaged over 10 runs. Table 1 lists the
   results for each AS connection and two of the best-effort
   connections (others are comparable and thus omitted). If the excess
   bandwidth is allocated equally among the 40 connections, each would
   get an additional 2.5% of the excess bandwidth, or 120 Kbps.
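   As a check on this arithmetic, the equal-share rates can be
   recomputed directly (our sketch; rates in Mbps):

```python
# Recompute the excess-bandwidth share for section 3.2 (rates in Mbps).
link = 20.0                                      # bottleneck bandwidth
targets = [2.0]*4 + [1.0]*4 + [0.5]*4 + [0.2]*4 + [0.1]*4 + [0.0]*20
share = (link - sum(targets)) / len(targets)     # equal share of excess
ideal = [round(t + share, 2) for t in targets]   # target + equal share
print(round(share * 1000))                       # -> 120 (Kbps)
print(sorted(set(ideal), reverse=True))          # -> [2.12, 1.12, 0.62, 0.32, 0.22, 0.12]
```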
   Table 1: Performance against target rate and target rate plus equal
   share

   ID   Measured rate (Mbps)   Target rate   Target rate plus 2.5% of excess
   0    1.32                   2.0           2.12
   1    1.27                   2.0           2.12
   2    1.32                   2.0           2.12
   3    1.31                   2.0           2.12
   4    0.93                   1.0           1.12
   5    0.94                   1.0           1.12
   6    0.95                   1.0           1.12
   7    0.95                   1.0           1.12
   8    0.60                   0.5           0.62
   9    0.58                   0.5           0.62
   10   0.58                   0.5           0.62
   11   0.58                   0.5           0.62
   12   0.38                   0.2           0.32
   13   0.35                   0.2           0.32
   14   0.39                   0.2           0.32
   15   0.36                   0.2           0.32
   16   0.29                   0.1           0.22
   17   0.30                   0.1           0.22
   18   0.30                   0.1           0.22
   19   0.30                   0.1           0.22
   20   0.25                   0             0.12
   21   0.21                   0             0.12

   In figure 7 the average measured rate is plotted against the target
   rate (columns 2 and 3 of table 1). The curve labeled "ideal"
   represents the case where each source achieves its target rate plus
   an equal share (1/40 or 0.025) of the excess bandwidth, i.e., the
   last column of table 1. (Figure 7 in pdf version.)

   Only the connections with small target rates reach or exceed them.
   Connection 4 was assigned half the target rate of connection 0, but
   actually achieves 70% of connection 0's rate. The only visible
   discrimination is that as the target rate increases, the achieved
   rate increases as well, but not proportionally. The explanation is
   the variation of the congestion window. After the window has closed
   due to packet losses, the connections with small target rates return
   to their former window size more quickly than those with bigger
   target rates, thus starting sooner to compete for the excess
   bandwidth. Small target-rate connections make the most of the time
   during which the large target-rate connections are increasing their
   window to capture excess bandwidth. This is comparable to the
   opportunism of small RTT connections at the expense of those with
   large RTT. Figures 8 and 9 show the measured rate of connections 0
   and 18, which have a target rate of 2.0 Mbps (250 Kbytes/s) and 100
   Kbps (12.5 Kbytes/s) respectively.
   (Figures 8 and 9 in pdf version)

3.3 Effect of a non-responsive source

   In this section we add a non-responsive source. Many applications
   that are candidates for quality of service are real-time
   applications that do not require a reliable transport protocol. They
   run over UDP and are non-responsive to congestion indication through
   loss.

   Our non-responsive source is a constant bit rate (CBR) source. We
   simulated 20 point-to-point connections and a bottleneck link of 20
   Mbps (2.5 Mbytes/s), with RIO parameters 173/347/0.05 195/347/0.02.
   The RTT is about 100 ms for every connection. There are 10 AS
   connections with a target rate of 1 Mbps (125 Kbytes/s) each and 10
   best-effort connections, including the one associated with the CBR
   source. Simulations were run 5 times with several values for the CBR
   output rate. Figure 10 shows the results averaged over five runs.
   There is one point per connection for each value of the CBR output
   rate. Figure 11 shows the same results with enhanced resolution
   around the operating region of the TCP connections. (Figures 10 and
   11 in pdf version)

   As the CBR source increases its sending rate, all the TCP
   connections get degraded, with the best-effort connections being
   pushed toward starvation. Of course the CBR source experiences
   increasing loss, but it still captures a lot of bandwidth. The
   bandwidth captured by the CBR source has a limit which corresponds
   to the situation where any OUT packet issued by any AS-enabled TCP
   connection gets dropped because the maximum threshold for OUT
   packets of the RIO algorithm is constantly reached. In this
   situation, AS-enabled TCP connections alternate slow-start phases
   and congestion avoidance phases, never going through a fast-recovery
   phase.
   As a result, almost all the packets leaving the hosts are marked IN
   and will make their way through the congested link, preventing the
   CBR source from grabbing more bandwidth.

   Next we make the non-responsive source AS-capable. We add a profiler
   to the output of the CBR source and set its target rate at 4 Mbps
   (500 Kbytes/s) for one run and 8 Mbps (1 Mbyte/s) for another. That
   leads to a total subscription of 70% or 90% of the bottleneck link
   bandwidth, respectively. The results in terms of achieved rate are
   almost identical to those shown in figures 10 and 11. Thus a
   non-responsive source produces the same impact on other connections
   whether it is AS capable or not. The actual difference lies in the
   number of packets sent by the CBR source which later get dropped.
   This is represented in figure 12 by the packet drop rate. (Figure 12
   in pdf version)

   Virtually no packets get dropped when the CBR source transmits at
   its target rate. When the CBR source has a target rate of 1 Mbyte/s,
   90% of the bottleneck bandwidth is allocated to IN-marked packets.
   This leads to early dropping of a non-negligible number of IN
   packets, showing that the min_threshold of RIO for IN packets is
   exceeded. When the CBR target rate is equal to 500 Kbytes/s and 70%
   of the bottleneck bandwidth is allocated to the assured service, no
   IN packet gets early dropped.

   As long as a CBR source sends packets at a rate close to the
   contracted one and the allocation of IN-marked packets leaves
   sufficient headroom on bottleneck links, the packet loss rate will
   be very low, if not zero. On the other hand, a CBR (non-responsive)
   source, whether AS enabled or not, will grab most of the bandwidth
   it needs at the expense of TCP connections.

3.4 Effect of AS on best-effort traffic

   In this section we explore how IN-marked traffic interacts with
   best-effort traffic.
   In the previous sections we saw that as the number of AS connections
   increases, best-effort connections' rates decrease. This is
   reasonable and acceptable since, for AS connections to get their
   subscribed bandwidth, some other connections must lose bandwidth.
   Our concern is whether AS and best-effort connections compete for
   the excess bandwidth on an equal footing. Each AS connection has a
   target rate of 1 Mbps (125 Kbytes/s). We did five simulation runs
   and made histograms of the percentage of connections whose average
   rate over the run fell into a particular bin. AS connections and
   best-effort connections are recorded separately. In figure 13, we
   see that the competition is not fair and that best-effort traffic
   has the advantage. There are 25 AS connections with a target rate of
   125 Kbytes/s, 25 best-effort connections, and a bottleneck link
   bandwidth of 6.25 Mbytes/s. There is 2.5 Mbytes/s of excess
   bandwidth. If shared equally among the 50 connections, each should
   get 50 Kbytes/s. Thus, best-effort connections should get 50
   Kbytes/s and AS connections should get this amount added to their
   target rates, or 175 Kbytes/s. The results show that the AS
   connections average 155 Kbytes/s and the best-effort connections
   average 75 Kbytes/s, more than an equal share of the excess.
   (Figure 13 in pdf version)

   An AS connection sending OUT packets will experience some drops,
   just like any best-effort connection, resulting in its sending
   window closing and the corresponding decrease of the sending rate.
   Meanwhile, other connections opportunistically increase their rate.
   Since AS connections send at a higher rate than best-effort
   connections do, their window size is larger and therefore, once it
   has closed, requires more time to return to its original size.
   During this time, best-effort connections can open their windows.
   In other words, a drop causes the window to close not only with
   respect to the excess bandwidth, but also with respect to the
   assured bandwidth, as there is only one single window handling IN
   and OUT packets as a whole.

3.5 Effects of traffic bursts on AS

   The simulations thus far have used a very simple topology. This has
   been useful, but results from such simulations are, of necessity,
   optimistic. The Internet is much more complex, composed of many
   networks merging together. At each merge point traffic gets
   aggregated, and bursts can accumulate throughout the network. In
   this section, we discuss results from a more complex topology, shown
   in figure 14. It is still relatively simple, but allows us to look
   at more performance issues.

                   Figure 14: A merging topology

      h0________                                       ____h8
                \                                     |
                 r0________                           |
      h1________/ 3ms      \                          |____h9
                            \                         |
      h2________             r4__________             |____h10
                \           / 10Mbps,3ms \            |         R
                 r1________/              \           |         E
   S  h3________/                          \          |____h11  C
   E                                        \ 1.5Mbps |         E
   N  h4________                             r6--------r7----|  I
   D            \                           /  4 ms   |____h12  V
   E             r2________                /          |         E
   R  h5________/ 3ms      \              /           |____h13  R
   S                        \            /            |         S
      h6________             r5_________/             |
                \           /10Mbps,3ms               |____h14
                 r3________/                          |
      h7________/                                     |
                   100 Mbps                           |____h15
                                                         100 Mbps

   All hosts are AS capable with a target rate of 100 kbps (12.5
   Kbytes/s). Aggregate IN-marked traffic is policed at each node.
   Policers have the following target rates: 200 kbps for nodes r0 to
   r3, 400 kbps for nodes r4 and r5, and 800 kbps for r6. Token buckets
   are used for metering, and are 4 packets deep for the profilers and
   6 packets deep for the policers. Packet size is 1500 bytes
   (including headers). The topology represents a decreasing bandwidth
   hierarchy ending in a T1 (1.5 Mbps) bottleneck link between r6 and
   r7. We used a maximum queue size of 24 packets for the T1 link and
   RIO parameters 4/18/0.05 5/18/0.02. Results are shown in table 2.
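   A toy calculation (ours; it ignores token refill during an
   instantaneous burst) shows how merging alone can demote conforming
   IN packets: two upstream profilers each pass a 4-packet IN burst,
   but the aggregate policer's bucket holds only 6 packets.

```python
# Toy model (ours) of IN demotion at an aggregate policer.
PKT = 1500                  # packet size in bytes
bucket = 6 * PKT            # policer bucket, 6 packets deep, initially full
demoted = 0
for _ in range(4 + 4):      # two merged, individually conforming IN bursts
    if bucket >= PKT:
        bucket -= PKT       # enough tokens: the packet stays IN
    else:
        demoted += 1        # no tokens left: remarked (demoted) to OUT
print(demoted)              # -> 2 of 8 conforming INs demoted
```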
Table 2: Results for the merging topology (7.5% of IN packets demoted)

   ID   Measured rate (kbps)   Overflow IN drops
    0           190                    3
    1           168                    1
    2           168                    2
    3           170                    0
    4           197                    2
    5           151                    0
    6           221                    0
    7           160                    1

There are no early drops of IN packets, but some INs are dropped due to
queue overflow, and the queue overflows due to bursts of OUT packets. Thus
marked traffic is being dropped even when it conforms to its profile. It is
possible that "push-out" implementations of drop preference would be better
suited to AS than RIO, since there would be no overflow drops of INs when
there are OUTs in the queue. Still, push-out queues are probably more
complex to implement, and they would not solve the problem of IN packets
arriving at a queue already full of IN-packet bursts (further study is
required to determine how prevalent this case might be). In addition to the
dropped INs, about 7.5% of IN packets are demoted, becoming candidates for
preferential drop as OUTs even though they were sent within their
contracted profile.

The results presented here are optimistic for two main reasons: first, the
merging is quite limited and second, we did not employ bursty traffic models
like HTTP for either best-effort or IN-marked traffic. Typical Internet paths
have on the order of 18 hops and thus would be more complex than the scenario
simulated here. The majority of connections and packets in the Internet today
are HTTP. However, this scenario reveals some issues related to traffic
merging and burstiness accumulation that need further investigation.

3.6 Discussion of results

The big question about AS is the meaning of the "assurance". Is it sufficient
to tell a subscriber that a flow using AS will tend to get "better effort"
than a best-effort flow? Our results show that it is unlikely that the
subscription rate can be viewed as a hard contract.
Depending on other traffic and specific topologies, a subscriber may get
more or less than the subscribed rate. In addition, it appears that the
merging structure of the Internet leads to remarking of IN packets and to
their dropping due to queue overflows, which may be caused by OUT packets.
Further, it seems that compliance may be difficult to check in any case
other than a long-lived connection. Although AS may give a "better effort"
for web browsing, it will be difficult to quantify this.

If we look at AS in a more general way, it appears that it can assure that
any packet marked as IN will reach its destination with a higher probability
than an unmarked packet, as long as all the networks involved in the
transmission are adequately provisioned.

4. Summary

In this draft, Assured service was evaluated based on simulations. The results
give insight into the workings of the assured service, thus serving as a base
for anyone interested in this architecture. Nevertheless, the topologies used
are quite simple and only long-lived connections were simulated. Since the
Internet has a complex topology and a large portion of the traffic consists of
short-lived HTTP connections, these simulations are optimistic in that they
reflect only the simplest dynamics. In particular, more work needs to be done
on traffic merging and burstiness accumulation.

Our main conclusion is that AS cannot offer a quantifiable service to TCP
traffic. In particular, AS does not allow a strict allocation of bandwidth
between several users. When IN packets and OUT packets are mixed in a
single TCP connection, drops of OUT packets negatively impact the
connection's performance. The interaction of TCP dynamics with priority
dropping might make TCP a poor choice of transport for AS.
Another important conclusion is that, because a single queue is used for IN
and OUT packets, the two traffic types depend strongly on each other. As a
result, some IN packets can be dropped as a consequence of OUT traffic
overflowing the queue. In the real Internet, where the burstiness of
best-effort traffic can be significant, this point is critical and should
not be disregarded. Assured service appears to have potential as a "better
effort" service, but its characteristics should be studied further before
deployment.

5. Security Considerations

There are no security considerations in this draft.

6. References

[ns] UCB, LBNL, VINT, "Network Simulator - ns",
http://www-mash.cs.berkeley.edu/ns/ns.html

[nsdoc] UC Berkeley, LBL, USC/ISI and Xerox PARC, "Ns Notes and
Documentation", http://www-mash.cs.berkeley.edu/ns/nsDoc.ps.gz

[ARCH] D. Black, S. Blake, M. Carlson, E. Davies, Z. Wang, and W. Weiss,
"An Architecture for Differentiated Services", Internet Draft
draft-ietf-diffserv-arch-00.txt, May 1998.

[HEADER] K. Nichols and S. Blake, "Definition of the Differentiated
Services Field (DS Byte) in the IPv4 and IPv6 Headers", Internet Draft
draft-ietf-diffserv-header-02.txt, August 1998.

[SVCALLOC] D. Clark and J. Wroclawski, "An Approach to Service Allocation
in the Internet", Internet Draft draft-clark-diff-svc-alloc-00.txt,
July 1997.

[EXPALLOC] D. Clark and W. Fang, "Explicit Allocation of Best-Effort
Packet Delivery Service",
http://diffserv.lcs.mit.edu/Papers/exp-alloc-ddc-wf.ps

[2BIT] K. Nichols, V. Jacobson, and L. Zhang, "A Two-bit Differentiated
Services Architecture for the Internet", Internet Draft
draft-nichols-diff-svc-arch-00.txt, November 1997,
ftp://ftp.ee.lbl.gov/papers/dsarch.pdf

[RED] S. Floyd and V. Jacobson, "Random Early Detection Gateways for
Congestion Avoidance", IEEE/ACM Transactions on Networking, August 1993.

7.
Authors' addresses

Juan-Antonio Ibanez
Bay Networks
4401 Great America Parkway, SC01-04
Santa Clara, CA 95052-8185
ibanez@eurecom.fr

Kathleen Nichols
Bay Networks
knichols@baynetworks.com

Appendix: Implementation of differentiated services in ns-2

Simulations have been performed with ns-2 (version ns2.1b1). Some
modifications or additions were required to implement the differentiated
services. New modules are represented hereafter with the inheritance tree
and are identified by gray boxes: (Figure in pdf version)

The core of these modules has been implemented in C++, and some OTcl code
was required to implement their interface with the front-end interpreter.
TclObject and NsObject are the two base classes from which any object is
derived; they provide all the basic functions allowing objects to interact
with each other and with the front-end interpreter, such as mirroring,
variable tracing, etc.

Besides these new classes, we have also made some minor modifications to
standard modules in order to accommodate some specific requirements. The
class TB implements a token bucket, which is used by the Profiler.

The profiler, also called a profile meter, is attached directly to a packet
source and measures the rate at which it is sending data, either by means
of an average rate estimator or a token bucket. If the measured rate is
below the target rate, it marks packets IN; when the target rate is
exceeded, packets are left unmarked. (Figure in pdf version)

A profiler is usually attached to a single source of traffic, but it can
also process an aggregate, for example at the egress of a domain. For this
reason the profiler's interface is written in such a way that it can be
attached either to an agent or to a link. The second option allows the
policing of an aggregate by attaching the profiler to the link exiting the
domain.
Two different meters were implemented for the profiler: an average rate
estimator and a token bucket. In [EXPALLOC] only the average rate estimator
is used. (Figures are shown in pdf version)

The average rate estimator is the time sliding window algorithm described
in [EXPALLOC]:

   Initialization:
       win_length = a constant
       avg_rate = 0
       last_arrival = 0

   Each time a packet is received:
       bytes_in_win = avg_rate * win_length
       new_bytes = bytes_in_win + pkt_size
       avg_rate = new_bytes / (now - last_arrival + win_length)
       last_arrival = now

The window length determines how much past history the algorithm remembers,
or in other words the weight of the past against the present. After the rate
estimation, the following tagging algorithm is executed:

   if avg_rate <= target_rate
       set DS field to IN
   else
       with probability Pout = (avg_rate - target_rate) / target_rate
           set DS field to OUT
       else
           set DS field to IN

This probabilistic marking allows for the well-known oscillations of a TCP
connection in pursuit of its maximum rate. As a matter of fact, for a TCP
connection to achieve a given average rate, the connection must be allowed
to oscillate around that value. The drawback is that this mechanism permits
other types of connections, such as a CBR source, to get on average more
than the target rate.

The policer is attached to an intermediate node representing a router and
monitors the aggregate of traffic which enters (or leaves) the node. If the
rate of the aggregate is below its target rate, packets are forwarded
unchanged, but if the target rate is exceeded, packets arriving marked IN
get remarked (demoted) to OUT before being forwarded.
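The profiler's rate estimator and tagging algorithm can be combined into a
small illustrative class. This is a sketch under our own naming (the class
name, interface, and RNG choice are assumptions), not the actual ns-2
module; rates are in bytes per second:

```cpp
#include <random>

// Time Sliding Window (TSW) rate estimator with probabilistic tagging,
// following the pseudocode above (from [EXPALLOC]).
class TswTagger {
public:
    enum Mark { IN, OUT };

    TswTagger(double target_Bps, double win_length)
        : target_(target_Bps), win_(win_length),
          avg_rate_(0.0), last_arrival_(0.0),
          rng_(42), uni_(0.0, 1.0) {}  // fixed seed: an assumption

    Mark tag(double now, double pkt_bytes) {
        // TSW rate update: fold the new packet into the sliding window.
        double bytes_in_win = avg_rate_ * win_;
        double new_bytes = bytes_in_win + pkt_bytes;
        avg_rate_ = new_bytes / (now - last_arrival_ + win_);
        last_arrival_ = now;
        // Probabilistic tagging as in the pseudocode above.
        if (avg_rate_ <= target_) return IN;
        double p_out = (avg_rate_ - target_) / target_;
        return (uni_(rng_) < p_out) ? OUT : IN;
    }

    double avg_rate() const { return avg_rate_; }

private:
    double target_;        // target rate, bytes/s
    double win_;           // window length, seconds
    double avg_rate_;      // current rate estimate, bytes/s
    double last_arrival_;  // time of last packet
    std::mt19937 rng_;
    std::uniform_real_distribution<double> uni_;
};
```

A source staying under the target rate is always tagged IN, while a source
sending persistently above it sees a growing share of its packets tagged
OUT, which is the behavior the probabilistic marker is meant to produce.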