Internet Engineering Task Force                  Integrated Services WG
INTERNET-DRAFT                             Shenker/Partridge/Wroclawski
draft-ietf-intserv-control-del-svc-02.txt                 Xerox/BBN/MIT
                                                       14 November 1995
                                                        Expires: ?/?/96

          Specification of Controlled Delay Quality of Service

Status of this Memo

This document is an Internet-Draft. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas,
and its working groups. Note that other groups may also distribute
working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as ``work in progress.''

To learn the current status of any Internet-Draft, please check the
``1id-abstracts.txt'' listing contained in the Internet-Drafts Shadow
Directories on ftp.is.co.za (Africa), nic.nordu.net (Europe),
munnari.oz.au (Pacific Rim), ds.internic.net (US East Coast), or
ftp.isi.edu (US West Coast).

This document is a product of the Integrated Services working group
of the Internet Engineering Task Force. Comments are solicited and
should be addressed to the working group's mailing list at
int-serv@isi.edu and/or the author(s).

This draft reflects changes from the IETF meeting in Stockholm.

Abstract

This memo describes the network element behavior required to deliver
Controlled Delay service in the Internet. Controlled delay service
provides three levels of delay control; network elements, when
overloaded, are required to control delay by denying service
requests.
However, there are no quantitative assurances about the absolute
level of delay provided. The controlled delay service is designed for
service-adaptive and delay-adaptive applications; i.e., applications
that are prepared to dynamically adapt to changing packet
transmission delays and to dynamically change the level of packet
delivery delay control they request from the network when their
current level of service is not adequate. The controlled delay
service imposes relatively minimal requirements on network components
that implement it, and is intended to be usable in situations ranging
from small centrally managed private IP networks to the global
Internet. This specification follows the service specification
template described in [1].

Introduction

This document defines the requirements for network elements that
support Controlled Delay service. This memo is one of a series of
documents that specify the network element behavior required to
support various qualities of service in IP internetworks. Services
described in these documents are useful both in the global Internet
and private IP networks.

This document is based on the service specification template given in
[1]. Please refer to that document for definitions and additional
information about the specification of qualities of service within
the IP protocol family.

End-to-End Behavior

The end-to-end behavior provided by a series of network elements that
conform to this document provides three levels of delay control. This
service ensures that the levels of experienced delays and losses will
be controlled, in that additional service requests will be turned
away when the element is overloaded. In particular, the bandwidth
available to the flow will be, on average, at least as great as
specified in its service request.
Criteria for determining when a resource is overloaded are not
specified in this definition, but are left to the individual vendor.
This service makes no assurances about the absolute levels of delay
or jitter the receiving application will experience. However, all
three levels of controlled delay service will have average delays
that are no worse than best effort service, and the maximal delays
should be significantly better than best effort service when there is
significant load on the network. Packet losses are rare as long as
the offered traffic conforms to the specified traffic
characterization (see Invocation Information).

This service is subject to admission control.

Motivation

Controlled delay service is designed for service-adaptive and
delay-adaptive applications. These applications are sensitive to
packet delivery delay, but are prepared to adapt to dynamically
changing delays by varying their playback point. In addition, they
may be prepared to change their requested level of service at any
time if the current level of service received from the network is not
adequate. This flexibility allows such applications to operate
successfully and efficiently over a wide range of network conditions.

Many applications that transmit interactive data, such as audio and
video conferencing sessions, are well suited to operation with the
controlled delay service. Applications that desire proven guarantees
on packet delivery time, such as real-time control and servoing
systems or playback applications that are intolerant of late-arriving
packets, are generally not in this category.

The end-to-end behavior obtained with controlled delay service
provides a middle ground between the employment of adaptive
applications in a pure best-effort network and the employment of a
network that rigidly controls delay.
Strengths of this middle ground are that applications can obtain some
load control and delivery preference for their packets while still
benefiting from their adaptive behavior; that the service can be
usefully deployed in large, unstructured internetworks; and that the
specification is amenable to highly efficient implementation and use
of network resources.

Associated with this service are characterization parameters which
describe the current delays experienced in the three service levels.
If these characterizations are provided to the endpoints, they will
provide some hint about the likely end-to-end delays that might
result from requesting a particular level of service. This is
intended to aid applications in choosing the appropriate service
level. However, this service is still quite usable without these
characterizations.

Network Element Data Handling Requirements

The network element must ensure that packet loss and delays are
controlled. This must be accomplished through active admission
control. In particular, overprovisioning is not sufficient to deliver
controlled delay service; the element must be able to turn flows away
if accepting them would cause the element to have excessive queueing
delays. However, no quantitative specification of average,
statistical, or maximal delays is required.

There are three different logical levels of service. A network
element may internally implement fewer (or more) actual levels of
service, but must map them into three logical levels at the
controlled delay service invocation interface. The levels have
different degrees of delay control, with level 1 having the most
tightly controlled delay, and level 3 having the least tightly
controlled delay.
The different levels do not have to give strictly ordered delays for
each packet; that is, the network element need not ensure that every
packet given level 1 service experiences less delay than if it were
given level 2 service. The element need only ensure that the typical
delays are no greater in level 1 than in level 2 (and similarly for
levels 2 and 3).

All three levels of service should be given better service, i.e.,
more tightly controlled delay, than uncontrolled best effort traffic.
The average delays experienced by packets receiving different levels
of controlled delay service and best-effort service may not differ
significantly. However, the tails of the delay distributions, i.e.,
the maximum packet delays seen, for the levels of controlled delay
service that are implemented and for best-effort service should be
significantly different when the network has substantial load.

The controlled delay service must maintain a very low level of packet
loss. Although packet losses may occur, any substantial loss
represents a "failure" of the admission control algorithm. However,
vendors may employ admission control algorithms with different levels
of conservativeness, resulting in very different levels of loss
(varying, for instance, from 1 in 10^4 to 1 in 10^8).

The controlled delay service definition does not require any control
of short-term packet jitter (variation in network element transit
delay between different packets in the flow) beyond the control
already exercised on delay. Network element implementors who find it
advantageous to do so may use resource scheduling algorithms that
exercise some jitter control.

Links are not permitted to fragment packets as part of controlled
delay service.
Packets larger than the MTU of the link must be treated as
nonconformant, which means that they will be policed according to the
rules described in the Policing section below.

Invocation Information

The controlled delay service is invoked by specifying the traffic
(TSpec) and the desired service (RSpec) to the network element. A
service request for an existing flow that has a new TSpec and/or
RSpec should be treated as a new invocation, in the sense that
admission control must be reapplied to the flow. Flows that reduce
their TSpec and/or their RSpec (i.e., their new TSpec/RSpec is
strictly smaller than the old TSpec/RSpec according to the rules
described in the Ordering and Merging section below) should never be
denied service.

The TSpec takes the form of a token bucket plus a minimum policed
unit (m) and a maximum packet size (M).

The token bucket has a bucket depth, b, and a bucket rate, r. Both b
and r must be positive. The rate, r, is measured in bytes of IP
datagrams per second, and can range from 1 byte per second to 40
terabytes per second (about what is believed to be the maximum
theoretical bandwidth of a single strand of fiber). Clearly,
particularly for large bandwidths, only the first few digits are
significant, so the use of floating point representations accurate to
at least 0.1% is encouraged.

The bucket depth, b, is also measured in bytes and can range from 1
byte to 250 gigabytes. Again, floating point representations accurate
to at least 0.1% are encouraged.

The range of values is intentionally large to allow for future
bandwidths. The range is not intended to imply that a network element
must support the entire range.

The minimum policed unit, m, is an integer measured in bytes. All IP
datagrams less than size m will be counted against the token bucket
as being of size m.
The maximum packet size, M, is the biggest packet that will conform
to the traffic specification; it is also measured in bytes. The flow
must be rejected if the requested maximum packet size is larger than
the MTU of the link. Both m and M must be positive, and m must be
less than or equal to M.

The RSpec is a service level, specified by one of the integers 1, 2,
or 3. Implementations should internally choose representations that
leave a range of at least 256 service levels undefined, for possible
extension in the future.

The TSpec can be represented by two floating point numbers in
single-precision IEEE floating point format followed by two 32-bit
integers in network byte order. The first value is the rate (r), the
second value is the bucket size (b), the third is the minimum policed
unit (m), and the fourth is the maximum packet size (M).

The RSpec may be represented as an unsigned 16-bit integer carried in
network byte order.

For all IEEE floating point values, the sign bit must be zero (all
values must be positive). Biased exponents less than 127 (i.e.,
unbiased exponents less than 0) are prohibited. Biased exponents
greater than 162 (i.e., unbiased exponents greater than positive 35)
are discouraged.

Exported Information

Each controlled delay service module exports at least the following
information. All of the parameters described below are
characterization parameters.

For each level of service, the network element exports three
measurements of delay (thus making nine quantities in total). Each of
these characterization parameters is based on the maximal packet
transit delay experienced over some set of previous time intervals of
length T; these delays do not include discarded packets. The three
time intervals T are 1 second, 60 seconds, and 3600 seconds. The
exported parameters are averages over some set of these previous time
intervals.
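The TSpec and RSpec wire formats given under Invocation Information can be sketched as follows. This is an illustrative encoding using Python's struct module; the function names are not part of the specification:

```python
import struct

def encode_tspec(r, b, m, M):
    """Encode a TSpec: rate r and bucket depth b as IEEE single-precision
    floats, minimum policed unit m and maximum packet size M as 32-bit
    unsigned integers, all in network byte order ("!")."""
    if r <= 0 or b <= 0 or m <= 0 or M <= 0:
        raise ValueError("all TSpec fields must be positive")
    if m > M:
        raise ValueError("m must be less than or equal to M")
    return struct.pack("!ffII", r, b, m, M)

def decode_tspec(data):
    """Decode a 16-byte TSpec back into (r, b, m, M)."""
    return struct.unpack("!ffII", data)

def encode_rspec(level):
    """Encode an RSpec service level (1, 2, or 3) as an unsigned
    16-bit integer in network byte order."""
    if level not in (1, 2, 3):
        raise ValueError("service level must be 1, 2, or 3")
    return struct.pack("!H", level)
```

Note that single-precision floats carry about 7 significant decimal digits, comfortably better than the 0.1% accuracy the specification asks for.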
There is no requirement that these characterization parameters be
based on exact measurements. In particular, these delay measurements
can be based on estimates of packet delays or aggregate measurements
of queue loading. This looseness is allowed to avoid placing undue
burdens on network element designs in which obtaining precise delay
measurements is difficult.

These delay parameters have an additive composition rule. For each
parameter the composition function computes the sum, enabling a setup
protocol to deliver the cumulative sum along the path to the end
nodes.

The delays are measured in units of one microsecond. An individual
element can advertise a delay value between 1 and 2**28 (somewhat
over two minutes), and the total delay added across all elements can
range as high as 2**32-1. Should the sum of the different elements'
delays exceed 2**32-1, the end-to-end advertised delay should be
2**32-1.

Note that while the granularity of measurement is microseconds, a
conforming element is free to measure delays more loosely. The
minimum requirement is that the element estimate its delay accurately
to the nearest 100 microseconds. Elements that can measure more
accurately are, of course, encouraged to do so.

NOTE: Measuring in milliseconds is not acceptable, because if the
minimum delay value is a millisecond, a path with several hops will
lead to a composed delay of at least several milliseconds, which is
likely to be misleading.

The characterization parameters may be represented as a sequence of
nine 32-bit unsigned integers in network byte order. The first three
integers are the parameters for T=1, T=60, and T=3600 for level 1,
the next three are for T=1, T=60, and T=3600 for level 2, and the
last three are for T=1, T=60, and T=3600 for level 3.
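The additive, saturating composition rule can be sketched as follows. This is an illustrative helper, assuming each hop reports its nine parameters (levels 1-3, each at T = 1, 60, 3600 seconds) as a list in the order given above:

```python
MAX_DELAY = 2**32 - 1  # saturation value for the composed end-to-end delay

def compose_delays(per_hop_params):
    """Compose per-hop delay characterization parameters along a path.
    Each hop advertises nine values in microseconds; the composition
    function is a per-parameter sum, capped at 2**32-1."""
    composed = [0] * 9
    for hop in per_hop_params:
        for i, delay in enumerate(hop):
            composed[i] = min(composed[i] + delay, MAX_DELAY)
    return composed
```

A setup protocol would carry the running `composed` vector downstream, each element adding its own advertised values before forwarding.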
The following values are assigned from the characterization parameter
namespace.

The controlled delay service is service_name 1.

The delay characterization parameters receive parameter_number's one
through nine, in the order given above. That is,

   parameter_name   definition

         1          Service Level = 1, T = 1
         2          Service Level = 1, T = 60
         3          Service Level = 1, T = 3600
         4          Service Level = 2, T = 1
         5          Service Level = 2, T = 60
         6          Service Level = 2, T = 3600
         7          Service Level = 3, T = 1
         8          Service Level = 3, T = 60
         9          Service Level = 3, T = 3600

The end-to-end composed results are assigned parameter_names N+10,
where N is the value of the per-hop name given above.

No other exported data is required by this specification.

Policing

Policing is done at the edge of the network, at all heterogeneous
source branch points, and at all source merge points. A heterogeneous
source branch point is a spot where the multicast distribution tree
from a source branches to multiple distinct paths, and the TSpecs of
the reservations on the various outgoing links are not all the same.
Policing need only be done if the TSpec on the outgoing link is "less
than" (in the sense described in the Ordering and Merging section)
the TSpec reserved on the immediately upstream link. A source merge
point is where the multicast distribution trees from two different
sources (sharing the same reservation) merge. It is the
responsibility of the invoker of the service (a setup protocol, local
configuration tool, or similar mechanism) to identify points where
policing is required. Policing is allowed at points other than those
mentioned above.

The token bucket parameters require that traffic must obey the rule
that over all time periods, the amount of data sent cannot exceed
rT+b, where r and b are the token bucket parameters and T is the
length of the time period.
For the purposes of this accounting, links must count packets that
are smaller than the minimum policed unit as being of size m. Packets
that arrive at an element and cause a violation of the rT+b bound are
considered nonconformant. At all policing points, nonconformant
packets are treated as best-effort datagrams. [If and when a marking
ability becomes available, these nonconformant packets should be
``marked'' as being non-compliant and then treated as best effort
packets at all subsequent routers.] Other actions, such as delaying
packets until they are compliant, are not allowed.

NOTE: The prohibition on delaying packets is open to discussion. It
may be better to permit some delaying of a packet if that delay would
allow it to pass the policing function (in other words, to reshape
the traffic). The challenge is to define a viable reshaping function.

Intuitively, a plausible approach is to allow a delay of (roughly) up
to the maximum queueing delay experienced by completely conforming
packets before declaring that a packet has failed to pass the
policing function. The merit of this approach, and the precise
wording of the specification that describes it, require further
study.

A related issue is that at all network elements, packets bigger than
the MTU of the link must be considered nonconformant and should be
classified as best effort (and will then either be fragmented or
dropped according to the element's handling of best effort traffic).
[Again, if marking is available, these reclassified packets should be
marked.]
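The rT+b accounting above can be sketched as a simple token bucket conformance check. This is an illustrative model rather than a normative algorithm; it assumes the bucket starts full and that an oversize packet is declared nonconformant without being charged against the bucket:

```python
class TokenBucketPolicer:
    """Conformance test for the rT+b rule: over any period of length T,
    conforming traffic cannot exceed r*T + b bytes. Packets smaller than
    the minimum policed unit m are counted as size m; packets larger
    than M are nonconformant outright."""

    def __init__(self, r, b, m, M):
        self.r, self.b, self.m, self.M = r, b, m, M
        self.tokens = b   # bucket starts full (an assumption of this sketch)
        self.last = 0.0   # time of the previous arrival, in seconds

    def conforms(self, size, now):
        # Refill tokens at rate r bytes/second, capped at depth b.
        self.tokens = min(self.b, self.tokens + self.r * (now - self.last))
        self.last = now
        if size > self.M:
            return False  # demoted to best effort (and marked, if possible)
        charge = max(size, self.m)  # minimum policed unit accounting
        if charge <= self.tokens:
            self.tokens -= charge
            return True
        return False      # nonconformant: treated as best effort
```

A nonconforming result here corresponds to treating the packet as a best-effort datagram, never to dropping or delaying it.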
Ordering and Merging

TSpecs are ordered according to the following rule: TSpec A is a
substitute for ("as good or better than") TSpec B if (1) both the
token bucket depth and rate for TSpec A are greater than or equal to
those of TSpec B, (2) the minimum policed unit m is at least as small
for TSpec A as it is for TSpec B, and (3) the maximum packet size M
is at least as large for TSpec A as it is for TSpec B.

A merged TSpec may be calculated over a set of TSpecs by taking the
largest token bucket rate, largest bucket size, smallest minimum
policed unit, and largest maximum packet size across all members of
the set. This use of the word "merging" is similar to that in the
RSVP protocol; a merged TSpec is one that is adequate to describe the
traffic from any one of a number of flows.

Service request specifications (RSpecs) are ordered by their
numerical values (in inverse order); service level 1 is substitutable
for service levels 2 and 3, and service level 2 is substitutable for
service level 3.

Guidelines for Implementors

It is expected that the service levels implemented at a particular
element will offer significantly different levels of delay control.
There seems little advantage in offering levels that differ only
slightly in the level of delay control. So, while a particular
element may offer fewer than three levels of service, the levels of
service it does offer should have notably different queueing delays.

NOTE: An additional service currently being considered is the
"predictive" service described in [3]. It is expected that if an
element offers both predictive service and controlled delay service,
it should not implement both separately but should use the predictive
service as a controlled delay service.
This is allowed since (1) the required behavior of predictive service
meets all of the requirements of controlled delay service, (2) the
invocations are compatible, and (3) the ordering relationships
defined in the predictive service specification document are such
that a given level of predictive service is at least as good as the
same level of controlled delay service. The inter-service mapping
with predictive service, mentioned above, is omitted from the
"Ordering and Merging" section of this draft of the controlled delay
service specification because the exact definition of both services
is still under discussion. Should the final definitions include an
inter-service mapping function, the Ordering and Merging sections of
each document might contain words similar to the following:

"In addition, the controlled delay service is related to the
predictive service in the sense that a given level of predictive
service is considered at least as good as the same level of
controlled delay service. See additional comments in the guidelines
section."

Network elements are permitted to oversubscribe their traffic, where
by oversubscribe we mean that the sum of the token buckets of the
controlled delay traffic exceeds the maximum throughput or buffer
space of the router. However, given the requirement of low loss, this
oversubscribing should only be done in cases where the element is
quite sure that actual utilization is far less than the sum of the
token buckets would suggest. A more conservative approach is to
reject new flows when the addition of their traffic would cause the
sums of the token buckets to exceed the capacity of the network
element.
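The TSpec ordering and merging rules, together with the conservative admission approach just described, might be sketched as follows. The TSpec class and function names are illustrative, not part of the specification:

```python
from dataclasses import dataclass

@dataclass
class TSpec:
    r: float  # token bucket rate, bytes per second
    b: float  # token bucket depth, bytes
    m: int    # minimum policed unit, bytes
    M: int    # maximum packet size, bytes

def substitutes_for(a, other):
    """True if TSpec a is 'as good or better than' TSpec other:
    larger-or-equal rate and depth, smaller-or-equal m, larger-or-equal M."""
    return (a.r >= other.r and a.b >= other.b
            and a.m <= other.m and a.M >= other.M)

def merge_tspecs(tspecs):
    """Merged TSpec adequate to describe the traffic of any one flow in
    the set: largest rate and depth, smallest m, largest M."""
    return TSpec(r=max(t.r for t in tspecs),
                 b=max(t.b for t in tspecs),
                 m=min(t.m for t in tspecs),
                 M=max(t.M for t in tspecs))

def admit(new, accepted, capacity):
    """Conservative admission control: reject the new flow if the sum of
    token bucket rates would exceed the element's capacity (bytes/sec)."""
    return new.r + sum(t.r for t in accepted) <= capacity
```

By construction, the merged TSpec substitutes for every member of the input set under the ordering rule above.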
Evaluation Criteria

Evaluating a network element's implementation of controlled delay
service is somewhat difficult, since the quality of service depends
on the overall traffic load, the traffic pattern presented, and the
degree of delay control implemented. In this section we sketch out a
methodology for testing an element's controlled delay service.

The idea is that one chooses a particular traffic mix (for instance,
30 percent level 1, 10 percent level 2, 20 percent level 3, and 40
percent uncontrolled best-effort traffic) and loads the network
element with progressively higher amounts of this traffic mix (i.e.,
40% of capacity, then 50% of capacity, and so on beyond 100% of
capacity). For each load level, one measures the utilization, mean
delays, and the packet loss rate for each level of service (including
best effort). Each test run at a particular load should involve
enough traffic that it is a reasonable predictor of the performance a
long-lived application such as a video conference would experience
(e.g., an hour or more of traffic).

This memo does not specify particular traffic mixes to test. However,
we expect that in the future, as the nature of real-time Internet
traffic is better understood, the traffic used in these tests will be
chosen to reflect the current and future Internet load.

Examples of Implementation

A possible implementation of controlled delay service would be to
have a queueing mechanism with three priority levels, with level 1
packets being highest priority and level 3 packets being lowest
priority. Each controlled delay service level would be associated
with a target queue utilization level, say 20% for level 1, 50% for
the combination of levels 1 and 2, and 70% for the combination of all
three levels.
The utilization of the link by each of the three levels would be
measured over some relatively short time period (say, 5 seconds, or
10000 MTU packet transmission times). A new flow would be admitted to
level 1 if the measured usage of level 1, plus the token bucket rate
of the new flow, was below the target utilization of level 1.
Similarly, a new flow would be admitted to level 2 if the measured
usage of levels 1 and 2, plus the token bucket rate of the new flow,
was below the target utilization of levels 1 and 2.

Examples of Use

We give two examples of use, both involving an interactive
application.

In the first example, we assume that either the receiving application
is ignoring characterizations or the network is not delivering the
characterizations to the end nodes. We further assume that the
application's data transmission units are timestamped. The receiver,
by inspecting the timestamps, can determine the end-to-end delays and
react if they are excessive. If so, the application asks for a better
level of service. If the delays are well below the required level,
the application can ask for a worse level of service. A protocol
useful to applications providing this capability is the proposed IETF
Real-Time Transport Protocol [2].

In the second example, we assume that characterization parameters are
delivered to the receiving application. The receiver chooses the
service level whose characterizations for the maximal delays for all
intervals are under the required level after network latencies are
considered. If the actual delays during the course of operation are
worse than expected, the application can ask for a better level of
service.

Security Considerations

Security considerations are not discussed in this memo.

References

[1] S. Shenker and J. Wroclawski.
"Network Element Service Specification Template", Internet Draft,
June 1995.

[2] H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson. "RTP:
A Transport Protocol for Real-Time Applications", Internet Draft,
March 1995.

[3] S. Shenker, C. Partridge, B. Davie, and L. Breslau.
"Specification of Predictive Quality of Service", Internet Draft, ??
1995.

Authors' Addresses:

Scott Shenker
Xerox PARC
3333 Coyote Hill Road
Palo Alto, CA 94304-1314
shenker@parc.xerox.com
415-812-4840
415-812-4471 (FAX)

Craig Partridge
BBN
2370 Amherst St
Palo Alto, CA 94306
craig@bbn.com

John Wroclawski
MIT Laboratory for Computer Science
545 Technology Sq.
Cambridge, MA 02139
jtw@lcs.mit.edu
617-253-7885
617-253-2673 (FAX)