SOC Working Group                                              Eric Noel
Internet-Draft                                                 AT&T Labs
Intended status: Standards Track                      Philip M. Williams
Expires: April 10, 2015                              BT Innovate & Design
                                                        October 10, 2014

             Session Initiation Protocol (SIP) Rate Control
               draft-ietf-soc-overload-rate-control-10.txt

Abstract

   The prevalent use of the Session Initiation Protocol (SIP) in Next
   Generation Networks necessitates that SIP networks provide adequate
   control mechanisms to maintain transaction throughput by preventing
   congestion collapse during traffic overloads.  A loss-based solution
   to remedy known vulnerabilities of the SIP 503 (Service Unavailable)
   overload control mechanism has already been proposed.  This document
   proposes a rate-based control scheme to complement the loss-based
   control scheme, using the same signaling.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 10, 2015.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
   Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Terminology
   3. Rate-based algorithm scheme
      3.1. Overview
      3.2. Via header field parameters for overload control
      3.3. Client and server rate-control algorithm selection
      3.4. Server operation
      3.5. Client operation
         3.5.1. Default algorithm
         3.5.2. Priority treatment
         3.5.3. Optional enhancement: avoidance of resonance
   4. Example
   5. Syntax
   6. Security Considerations
   7. IANA Considerations
   8. References
      8.1. Normative References
      8.2. Informative References
   Appendix A. Contributors
   Appendix B. Acknowledgments

1. Introduction

   The use of SIP in large-scale Next Generation Networks requires that
   SIP-based networks provide adequate control mechanisms for handling
   traffic growth.  In particular, SIP networks must be able to handle
   traffic overloads gracefully, maintaining transaction throughput by
   preventing congestion collapse.

   A promising SIP-based overload control solution has been proposed in
   [RFC7339].  That solution provides a communication scheme for
   overload control algorithms.  It also includes a default loss-based
   overload control algorithm that makes it possible for a set of
   clients to limit the load offered towards an overloaded server.

   However, such a loss-based control algorithm is sensitive to
   variations in load: any increase in load is directly reflected in
   the offered load that clients present to the overloaded servers.
   More importantly, a loss-based control scheme cannot guarantee an
   upper bound on the load offered by clients towards an overloaded
   server, and it requires frequent updates, which may have
   implications for stability.

   In accordance with the framework defined in [RFC7339], this document
   proposes an alternative overload control scheme, the rate-based
   overload control algorithm.  The rate-based control guarantees an
   upper bound on the rate, constant between server updates, of
   requests sent by clients towards an overloaded server.  The tradeoff
   is additional algorithmic complexity, since with the rate-based
   approach the overloaded server is more likely to use a different
   target (maximum rate) for each client than with the loss-based
   approach.

   The rate-based overload control algorithm proposed here mitigates
   congestion in SIP networks while adhering to the overload signaling
   scheme of [RFC7339]; it is presented as an optional alternative to
   the default loss-based control scheme of [RFC7339].

2. Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

   Unless otherwise specified, all SIP entities described in this
   document are assumed to support this specification.

3. Rate-based algorithm scheme

3.1. Overview

   The server is the entity protected by the overload control algorithm
   defined here, and the client is the entity that throttles traffic
   towards the server.

   Following the procedures defined in [RFC7339], the server and its
   clients signal one another support for rate-based overload control.

   Then, periodically, the server relies on internal measurements
   (e.g., CPU utilization or queueing delay) to evaluate its overload
   state and estimate a target maximum SIP request rate in requests per
   second (as opposed to a target percent loss in the case of loss-
   based control).

   When in overload, the server uses the Via header field oc parameter
   [RFC7339] of SIP responses to inform each client of its overload
   state and of the target maximum SIP request rate for that client.

   Upon receiving the oc parameter with a target maximum SIP request
   rate, each client throttles new SIP requests towards the overloaded
   server.

3.2. Via header field parameters for overload control

   The Via header field parameters used for overload control, including
   the oc parameter that informs clients of the desired maximum rate,
   are defined in [RFC7339] and summarized below:

   oc:  Used by clients in SIP requests to indicate support for
      [RFC7339], and by servers to indicate the load reduction amount
      for the loss algorithm or the maximum rate, in messages per
      second, for the rate-based algorithm described here.

   oc-algo:  Used by clients in SIP requests to advertise the supported
      overload control algorithms, and by servers to notify clients of
      the algorithm in effect.  Supported values are "loss" (default)
      and "rate" (optional).

   oc-validity:  Used by servers in SIP responses to indicate an
      interval of time (in milliseconds) during which the load
      reduction should be in effect.  A value of 0 is reserved for the
      server to stop overload control.  A non-zero value is required in
      all other cases.

   oc-seq:  A sequence number associated with the "oc" parameter.

   Consult Section 4 for an illustration of the Via header field oc
   parameter usage.

3.3. Client and server rate-control algorithm selection

   Per [RFC7339], new clients indicate the overload control algorithms
   they support to servers by inserting oc and oc-algo, with the names
   of the supported algorithms, in the Via header field of SIP requests
   destined to servers.  The inclusion by the client of the token
   "rate" indicates that the client supports a rate-based algorithm.
   Conversely, servers notify clients of the selected overload control
   algorithm through the oc-algo parameter in the Via header field of
   SIP responses to clients.  The inclusion by the server of the token
   "rate" in the oc-algo parameter indicates that the rate-based
   algorithm has been selected by the server.
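
   Purely as an illustration of this negotiation (it is not part of the
   normative procedure), the following sketch shows how a client might
   test whether the oc-algo value returned by the server selects the
   rate-based algorithm, assuming the parameter value has already been
   extracted from the topmost Via header field and unquoted.  The
   function name and the fixed-size buffer are illustrative
   assumptions; the sketch is written in C only for concreteness.

      #include <stdio.h>
      #include <string.h>

      // Return 1 if the comma-separated oc-algo value names the "rate"
      // algorithm, 0 otherwise.  The caller is assumed to have already
      // stripped the surrounding quotes from the parameter value.
      static int oc_algo_selects_rate(const char *oc_algo_value)
      {
          char buf[128];
          char *token;

          strncpy(buf, oc_algo_value, sizeof(buf) - 1);
          buf[sizeof(buf) - 1] = '\0';

          for (token = strtok(buf, ","); token != NULL;
               token = strtok(NULL, ",")) {
              if (strcmp(token, "rate") == 0)
                  return 1;
          }
          return 0;
      }

      int main(void)
      {
          // A response carrying oc-algo="rate" selects rate-based control.
          printf("%d\n", oc_algo_selects_rate("rate"));   // prints 1
          printf("%d\n", oc_algo_selects_rate("loss"));   // prints 0
          return 0;
      }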

   Clients MUST indicate support of rate-based control by including the
   token "rate" in the oc-algo list.  Servers MUST indicate selection
   of rate-based control by setting oc-algo to the token "rate".

3.4. Server operation

   The actual algorithm used by the server to determine its overload
   state and estimate a target maximum SIP request rate is beyond the
   scope of this document.

   However, the server MUST periodically evaluate its overload state
   and estimate a target SIP request rate beyond which it would become
   overloaded.  The server must determine how it will allocate the
   target SIP request rate among its clients.  The server may set the
   same rate for every client, or it may set different rates for
   different clients.

   The maximum rate determined by the server for a client applies to
   the entire stream of SIP requests, even though throttling may only
   affect a particular subset of the requests, since, as per [RFC7339]
   and REQ 13 of [RFC5390], request prioritization is a client's
   responsibility.

   When setting the maximum rate for a particular client, the server
   may need to take into account the workload (e.g., CPU load per
   request) of the distribution of message types from that client.
   Furthermore, because the client may prioritize the specific types of
   messages it sends while under overload restriction, this
   distribution of message types may differ (e.g., with either higher
   or lower CPU load) from the message distribution for that client
   under non-overload conditions.

   Note that the "oc" parameter for the rate algorithm is an upper
   bound (in messages per second) on the traffic sent by the client to
   the server.  The client may send traffic at a rate significantly
   lower than the upper bound for a variety of reasons.

   In other words, when multiple clients are being controlled by an
   overloaded server, at any given time some clients may receive
   requests at a rate below their target (maximum) SIP request rate
   while others receive requests above that target rate.  But the
   resulting request rate presented to the overloaded server will
   converge towards the target SIP request rate.

   Upon detection of overload and the determination to invoke overload
   controls, the server MUST follow the specifications in [RFC7339] to
   notify its clients of the allocated target SIP request rate and of
   the fact that rate-based control is in effect.

   The server MUST use the [RFC7339] "oc" parameter to send a target
   SIP request rate to each of its clients.

   When a client supports the default loss algorithm but not the rate
   algorithm, the client is handled in the same way as described in
   Section 5.10.2 of [RFC7339].

3.5. Client operation

3.5.1. Default algorithm

   In determining whether or not to transmit a specific message, the
   client may use any algorithm that limits the message rate to the
   "oc" parameter in units of messages per second.  For ease of
   discussion, we define T = 1/["oc" parameter] as the target inter-SIP
   request interval.  The algorithm may be strictly deterministic, or
   it may be probabilistic.  It may, or may not, have a tolerance
   factor to allow for short bursts, as long as the long-term rate
   remains below 1/T.

   The algorithm may have provisions for prioritizing traffic in
   accordance with REQ 13 of [RFC5390].
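
   As one hedged illustration of such a conforming scheme, the sketch
   below (in C, with illustrative names) admits a request only if at
   least T seconds have elapsed since the last admitted request.  This
   is a strictly deterministic limiter with no tolerance for bursts,
   corresponding to the 'classic gapping' (TAU = 0) special case
   mentioned in Section 3.5.3; it is not the default algorithm defined
   below.

      #include <stdio.h>

      // T: target inter-request interval in seconds, T = 1/["oc" parameter].
      static double T;
      // Time at which the last request was admitted (negative = none yet).
      static double last_admitted = -1.0;

      // Return 1 to forward the request arriving at time 'now' (seconds),
      // 0 to reject it.  At most one request is admitted per interval T.
      static int admit(double now)
      {
          if (last_admitted < 0.0 || now - last_admitted >= T) {
              last_admitted = now;
              return 1;
          }
          return 0;
      }

      int main(void)
      {
          // Assume the server signalled oc=10 (messages/second): T = 0.1 s.
          T = 1.0 / 10.0;
          double arrivals[] = { 0.00, 0.03, 0.11, 0.15, 0.25 };
          // Forwards the arrivals at t=0.00, 0.11, and 0.25; rejects the rest.
          for (int i = 0; i < 5; i++)
              printf("t=%.2f -> %s\n", arrivals[i],
                     admit(arrivals[i]) ? "forward" : "reject");
          return 0;
      }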

   If the algorithm requires other parameters (in addition to "T",
   which is 1/["oc" parameter]), they may be set autonomously by the
   client, or they may be negotiated between client and server
   independently of the SIP-based overload control solution.

   In either case, the coordination is out of scope for this document.
   The default algorithms presented here (one without provisions for
   prioritizing traffic, one with) are only examples.

   To throttle new SIP requests at the rate specified by the "oc"
   parameter sent by the server to its clients, the client MAY use the
   proposed default algorithm for rate-based control or any other
   equivalent algorithm that forwards messages in conformance with the
   upper bound of 1/T messages per second.

   The default leaky bucket algorithm presented here is based on
   Appendix A.2 of [ITU-T Rec. I.371].  The algorithm makes it possible
   for clients to deliver SIP requests at a rate specified by the "oc"
   parameter with tolerance parameter TAU (preferably configurable).

   Conceptually, the leaky bucket algorithm can be viewed as a finite-
   capacity bucket whose real-valued content drains out at a continuous
   rate of 1 unit of content per time unit and whose content increases
   by the increment T for each forwarded SIP request.  T is computed as
   the inverse of the rate specified by the "oc" parameter, namely
   T = 1 / ["oc" parameter].

   Note that when the "oc" parameter is 0 with a non-zero oc-validity,
   the client should reject 100% of SIP requests destined to the
   overloaded server.  However, when the oc-validity value is 0, the
   client should immediately stop throttling.

   If, at a new SIP request arrival, the content of the bucket is less
   than or equal to the limit value TAU, then the SIP request is
   forwarded to the server; otherwise, the SIP request is rejected.

   Note that the capacity of the bucket (the upper bound of the
   counter) is (T + TAU).

   The tolerance parameter TAU determines how close the long-term
   admitted rate is to an ideal control that would admit all SIP
   requests for arrival rates less than 1/T and then admit SIP requests
   precisely at the rate of 1/T for arrival rates above 1/T.  In
   particular, at mean arrival rates close to 1/T, it determines the
   tolerance to deviations of the inter-arrival time from T (the larger
   TAU, the greater the tolerance to deviations from the inter-
   departure interval T).

   This deviation from the inter-departure interval influences the
   burstiness of the admitted rate, that is, the number of consecutive
   SIP requests forwarded to the server (the burst size is proportional
   to TAU divided by the difference between 1/T and the arrival rate).

   In situations where clients are configured with some knowledge about
   the server (e.g., operator pre-provisioning), it can be beneficial
   to choose a value of TAU based on how many clients will be sending
   requests to the server.

   Servers with a very large number of clients, each with a relatively
   small arrival rate, will generally benefit from a smaller value for
   TAU in order to limit queueing (and hence response times) at the
   server when subjected to a sudden surge of traffic from all clients.
   Conversely, a server with a relatively small number of clients, each
   with a proportionally larger arrival rate, will benefit from a
   larger value of TAU.
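
   As a purely illustrative sketch of the burst size bound discussed
   above (assuming, for concreteness, oc = 100 messages per second and
   TAU = 4*T, the value suggested later in this section), the following
   C fragment offers a back-to-back burst to an initially empty bucket
   and counts how many requests are admitted before the bucket fills:
   at most 1 + TAU/T requests, i.e., 5 requests with these values.

      #include <stdio.h>

      int main(void)
      {
          // Illustrative values, with time measured in milliseconds:
          // oc = 100 messages per second.
          int T   = 10;       // bucket increment per admitted request: 1/oc
          int TAU = 4 * T;    // tolerance parameter: 40 ms
          int X   = 0;        // leaky bucket counter, initially empty
          int admitted = 0;

          // Offer 10 requests back to back (zero inter-arrival time),
          // so nothing drains out of the bucket between arrivals.
          for (int i = 0; i < 10; i++) {
              int Xp = X;     // no time has elapsed, so nothing has leaked
              if (Xp <= TAU) {
                  X = Xp + T; // admit: bucket content rises by T
                  admitted++;
              }
          }
          // Prints 5: at most 1 + TAU/T requests pass before the bucket
          // fills; later requests are rejected until the bucket drains.
          printf("admitted burst size = %d\n", admitted);
          return 0;
      }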

   Once the control has been activated, at the arrival time of the k-th
   new SIP request, ta(k), the content of the bucket is provisionally
   updated to the value

      X' = X - (ta(k) - LCT)

   where X is the value of the leaky bucket counter after arrival of
   the last forwarded SIP request, and LCT is the time at which the
   last SIP request was forwarded.

   If X' is less than or equal to the limit value TAU, then the new SIP
   request is forwarded and the leaky bucket counter X is set to X' (or
   to 0 if X' is negative) plus the increment T, and LCT is set to the
   current time ta(k).  If X' is greater than the limit value TAU, then
   the new SIP request is rejected and the values of X and LCT are
   unchanged.

   When the first response from the server has been received indicating
   control activation (oc-validity > 0), LCT is set to the time of
   activation, and the leaky bucket counter is initialized to the
   parameter TAU0 (preferably configurable), which is 0 or larger but
   less than or equal to TAU.

   TAU can assume any positive real number value and is not necessarily
   bounded by T.

   TAU = 4*T is a reasonable compromise between burst size and
   throttled rate adaptation at low offered rates.

   Note that the specification of a value for TAU and any communication
   or coordination between servers are beyond the scope of this
   document.

   A reference algorithm is shown below.

   No-priority case:

      // T: inter-transmission interval, set to 1 / ["oc" parameter]
      // TAU: tolerance parameter
      // ta: arrival time of the most recent arrival received by the
      //     client
      // LCT: arrival time of the last SIP request that was sent to the
      //      server (initialized to the first arrival time)
      // X: current value of the leaky bucket counter (initialized to
      //    TAU0)

      // After the most recent arrival, calculate auxiliary variable Xp
      Xp = X - (ta - LCT);

      if (Xp <= TAU) {
         // Transmit SIP request
         // Update X and LCT
         X = max (0, Xp) + T;
         LCT = ta;
      } else {
         // Reject SIP request
         // Do not update X and LCT
      }

3.5.2. Priority treatment

   As with the loss-based algorithm of [RFC7339], a client implementing
   the rate-based algorithm also prioritizes messages into two or more
   categories of requests: requests that are candidates for reduction,
   and requests not subject to reduction (except under extenuating
   circumstances, when there aren't any messages in the first category
   that can be reduced).

   Accordingly, the proposed leaky bucket implementation is modified to
   support priority using two thresholds for SIP requests in the set of
   requests that are candidates for reduction.  With two priorities,
   the proposed leaky bucket requires two thresholds TAU1 < TAU2:

   .  All new requests would be admitted when the leaky bucket counter
      is at or below TAU1,

   .  Only higher-priority requests would be admitted when the leaky
      bucket counter is between TAU1 and TAU2,

   .  All requests would be rejected when the bucket counter is above
      TAU2.

   This can be generalized to n priorities using n thresholds, for
   n > 2, in the obvious way.

   With a priority scheme that relies on two tolerance parameters (TAU2
   influences the priority traffic, TAU1 influences the non-priority
   traffic), always set TAU1 < TAU2 (TAU is replaced by TAU1 and TAU2).
   Setting both tolerance parameters to the same value is equivalent to
   having no priority.
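
   For illustration only, the three admission bands listed above can be
   restated as a small decision function over the provisional bucket
   value X'.  The names and sample threshold values in this C sketch
   are illustrative assumptions (the thresholds follow the guidance
   given later in this section); the reference algorithm below remains
   the authoritative description.

      #include <stdio.h>

      // Outcome of the two-threshold admission test described above.
      enum admit_decision { ADMIT_ALL, ADMIT_PRIORITY_ONLY, REJECT_ALL };

      // Map the provisional bucket value Xp onto the three bands defined
      // by TAU1 < TAU2: everything is admitted at or below TAU1, only
      // priority requests between TAU1 and TAU2, and nothing above TAU2.
      static enum admit_decision classify(double Xp, double TAU1,
                                          double TAU2)
      {
          if (Xp <= TAU1)
              return ADMIT_ALL;
          if (Xp <= TAU2)
              return ADMIT_PRIORITY_ONLY;
          return REJECT_ALL;
      }

      int main(void)
      {
          // Illustrative thresholds with T = 0.01 s, TAU2 = 10*T, and
          // TAU1 = 1/2 * TAU2 (values suggested later in this section).
          double T = 0.01, TAU2 = 10 * T, TAU1 = 0.5 * TAU2;
          printf("%d %d %d\n",
                 classify(0.02, TAU1, TAU2),   // 0: admit all requests
                 classify(0.07, TAU1, TAU2),   // 1: admit priority only
                 classify(0.12, TAU1, TAU2));  // 2: reject all requests
          return 0;
      }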

   TAU1 influences the admitted rate in the same way that TAU does when
   no priority is set, and the larger the difference between TAU1 and
   TAU2, the closer the control is to strict priority queueing.

   TAU1 and TAU2 can assume any positive real number value and are not
   necessarily bounded by T.

   Reasonable values for TAU0, TAU1, and TAU2 are: TAU0 = 0,
   TAU1 = 1/2 * TAU2, and TAU2 = 10 * T.

   Note that the specification of values for TAU1 and TAU2 and any
   communication or coordination between servers are beyond the scope
   of this document.

   A reference algorithm is shown below.

   Priority case:

      // T: inter-transmission interval, set to 1 / ["oc" parameter]
      // TAU1: tolerance parameter for non-priority SIP requests
      // TAU2: tolerance parameter for priority SIP requests
      // ta: arrival time of the most recent arrival received by the
      //     client
      // LCT: arrival time of the last SIP request that was sent to the
      //      server (initialized to the first arrival time)
      // X: current value of the leaky bucket counter (initialized to
      //    TAU0)

      // After the most recent arrival, calculate auxiliary variable Xp
      Xp = X - (ta - LCT);

      if ((AnyRequestReceived && Xp <= TAU1) ||
          (PriorityRequestReceived && Xp <= TAU2 && Xp > TAU1)) {
         // Transmit SIP request
         // Update X and LCT
         X = max (0, Xp) + T;
         LCT = ta;
      } else {
         // Reject SIP request
         // Do not update X and LCT
      }

3.5.3. Optional enhancement: avoidance of resonance

   As the number of client sources of traffic increases or the
   throughput of the server decreases, the maximum rate admitted by
   each client needs to decrease, and therefore the value of T becomes
   larger.  Under some circumstances, e.g., if the traffic arises very
   quickly and simultaneously at many sources, the occupancies of the
   individual buckets can become synchronized, resulting in admissions
   from each source that are close in time, and hence batched or very
   'peaky' arrivals at the server.  This not only gives rise to control
   instability, but also to very poor delays and even lost messages.
   An appropriate term for this is 'resonance' [Erramilli].

   If the network topology is such that resonance can occur, then a
   simple way to avoid resonance is to randomize the bucket occupancy
   at two appropriate points: at the activation of control and whenever
   the bucket empties, as follows.

   After updating the value of the leaky bucket to X', generate a value
   u as follows:

      if X' > 0, then u = 0;

      else if X' <= 0, then let u be a random value uniformly
      distributed between -1/2 and +1/2.

   Then (only) if the arrival is admitted, increase the bucket by an
   amount T + uT, which will therefore be just T if the bucket had not
   emptied, or lie between T/2 and 3T/2 if it had.

   This randomization should also be done when control is activated,
   i.e., instead of simply initializing the leaky bucket counter to
   TAU0, initialize it to TAU0 + uT, where u is uniformly distributed
   as above.  Since activation would have been the result of a response
   to a request sent by the client, the second term in this expression
   can be interpreted as being the bucket increment following that
   admission.

   This method has the following characteristics:

   .  If TAU0 is chosen to be equal to TAU and all sources were to
      activate control at the same time due to an extremely high
      request rate, then the time until the first request admitted by
      each client would be uniformly distributed over [0,T];

   .  The maximum occupancy is TAU + (3/2)T, rather than TAU + T
      without randomization;

   .  For the special case of 'classic gapping', where TAU = 0, the
      minimum time between admissions is uniformly distributed over
      [T/2, 3T/2], and the mean time between admissions is the same,
      i.e., T + 1/R, where R is the request arrival rate;

   .  At high load, randomization rarely occurs (since the bucket
      rarely empties), so there is no loss of precision of the admitted
      rate, even though the randomized 'phasing' of the buckets
      remains.

4. Example

   Adapting the example in Section 6.2 of [RFC7339], where client P1
   sends requests to a downstream server P2:

      INVITE sips:user@example.com SIP/2.0
      Via: SIP/2.0/TLS p1.example.net;
        branch=z9hG4bK2d4790.1;received=192.0.2.111;
        oc;oc-algo="loss,rate"
      ...

      SIP/2.0 100 Trying
      Via: SIP/2.0/TLS p1.example.net;
        branch=z9hG4bK2d4790.1;received=192.0.2.111;
        oc=0;oc-algo="rate";oc-validity=0;
        oc-seq=1282321615.781
      ...

   In the messages above, the first message is a SIP request sent by P1
   to P2.  Because P1 supports overload control, it inserts the "oc"
   parameter in the topmost Via header field that it created.  P1
   supports two overload control algorithms: loss and rate.

   The second message, a SIP response, shows the topmost Via header
   field amended by P2 according to this specification and sent to P1.
   Because P2 also supports overload control, it chooses the rate-based
   scheme and sends that back to P1 in the oc-algo parameter.  It uses
   oc-validity=0 to indicate that no overload control is in effect.  In
   this example oc=0, but oc could be any value, as oc is ignored when
   oc-validity=0.

   At some later time, P2 starts to experience overload.  It sends the
   following SIP message, indicating that P1 should send SIP requests
   at a rate no greater than 150 SIP requests per second, for a
   duration of 1,000 msec.

      SIP/2.0 180 Ringing
      Via: SIP/2.0/TLS p1.example.net;
        branch=z9hG4bK2d4790.1;received=192.0.2.111;
        oc=150;oc-algo="rate";oc-validity=1000;
        oc-seq=1282321615.782
      ...

5. Syntax

   This specification extends the existing definition of the Via header
   field parameters of [RFC3261] as follows:

      algo-list =/ "rate"

6. Security Considerations

   Aside from the resonance concerns discussed in Section 3.5.3, this
   mechanism does not introduce any security concerns beyond the
   general overload-control security issues discussed in [RFC7339].
   Methods to mitigate the risk of resonance are discussed in Section
   3.5.3.

7. IANA Considerations

   Header Field   Parameter Name   Predefined Values   Reference
   _______________________________________________________________
   Via            oc-algo          Yes                 RFC 7339,
                                                       RFCOPRQ

   [NOTE TO RFC-EDITOR: Please replace RFCOPRQ with the final RFC
   number of draft-ietf-soc-overload-rate-control.]

8. References

8.1. Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC3261] Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
             A., Peterson, J., Sparks, R., Handley, M., and E.
             Schooler, "SIP: Session Initiation Protocol", RFC 3261,
             June 2002.

   [RFC5390] Rosenberg, J., "Requirements for Management of Overload in
             the Session Initiation Protocol", RFC 5390, December 2008.

   [RFC7339] Gurbani, V., Hilt, V., and H. Schulzrinne, "Session
             Initiation Protocol (SIP) Overload Control", RFC 7339,
             September 2014.

8.2. Informative References

   [ITU-T Rec. I.371]
             ITU-T, "Traffic control and congestion control in B-ISDN",
             ITU-T Recommendation I.371.

   [Erramilli]
             Erramilli, A. and L.J. Forys, "Traffic Synchronization
             Effects In Teletraffic Systems", ITC-13, 1991.

Appendix A. Contributors

   Significant contributions to this document were made by Janet Gunn.

Appendix B. Acknowledgments

   Many thanks for comments and feedback on this document to: Richard
   Barnes, Keith Drage, Vijay Gurbani, Volker Hilt, Christer Holmberg,
   Winston Hong, Peter Yee, and James Yu.

   This document was prepared using 2-Word-v2.0.template.dot.

Authors' Addresses

   Eric Noel
   AT&T Labs
   200 S Laurel Avenue
   Middletown, NJ 07747
   USA

   Philip M. Williams
   BT Innovate & Design
   Ipswich, IP5 3RE
   UK