SIPPING                                                     J. Rosenberg
Internet-Draft                                                     Cisco
Intended status: Informational                         November 17, 2007
Expires: May 20, 2008

     Requirements for Management of Overload in the Session Initiation
                                 Protocol
                   draft-ietf-sipping-overload-reqs-01

Status of this Memo

   By submitting this Internet-Draft, each author represents that any applicable patent or other IPR claims of which he or she is aware have been or will be disclosed, and any of which he or she becomes aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups.
   Note that other groups may also distribute working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on May 20, 2008.

Copyright Notice

   Copyright (C) The IETF Trust (2007).

Abstract

   Overload occurs in Session Initiation Protocol (SIP) networks when proxies and user agents have insufficient resources to complete the processing of a request.  SIP provides limited support for overload handling through its 503 response code, which tells an upstream element that it is overloaded.  However, numerous problems have been identified with this mechanism.  This draft summarizes the problems with the existing 503 mechanism and provides requirements for a solution.

Table of Contents

   1.  Introduction
   2.  Causes of Overload
   3.  Current SIP Mechanisms
   4.  Problems with the Mechanism
       4.1.  Load Amplification
       4.2.  Underutilization
       4.3.  The Off/On Retry-After Problem
       4.4.  Ambiguous Usages
   5.  Solution Requirements
   6.  Security Considerations
   7.  IANA Considerations
   8.  Acknowledgements
   9.  Informative References
   Author's Address
   Intellectual Property and Copyright Statements

1.  Introduction

   Overload occurs in Session Initiation Protocol (SIP) [RFC3261] networks when proxies and user agents have insufficient resources to complete the processing of a request or a response.  SIP provides limited support for overload handling through its 503 response code, which tells an upstream element that it is overloaded.  However, numerous problems have been identified with this mechanism.

   This draft describes the general problem of SIP overload and reviews the current SIP mechanisms for dealing with it.  It then explains some of the problems with these mechanisms.  Finally, it provides a set of requirements for fixing these problems.

2.  Causes of Overload

   Overload occurs when an element, such as a SIP user agent or proxy, has insufficient resources to successfully process all of the traffic it is receiving.  Resources include all of the capabilities of the element used to process a request, including CPU processing, memory, I/O, and disk resources.
   They can also include external resources, such as a database or DNS server, in which case the CPU processing, memory, I/O, and disk resources of those servers are effectively part of the logical element processing the request.  Overload can occur for many reasons, including:

   Poor Capacity Planning:  SIP networks need to be designed with sufficient numbers of servers, hardware, disks, and so on, in order to meet the needs of the subscribers they are expected to serve.  Capacity planning is the process of determining these needs.  It is based on the number of expected subscribers and the types of flows they are expected to use.  If this work is not done properly, the network may have insufficient capacity to handle predictable usages, including regular usages and predictably high ones (such as high voice calling volumes on Mother's Day).

   Dependency Failures:  A SIP element can become overloaded because a resource on which it depends has failed or become overloaded, greatly reducing the logical capacity of the element.  In these cases, even minimal traffic might cause the server to go into overload.  Examples of such dependency overloads include DNS servers, databases, disks, and network interfaces.

   Component Failures:  A SIP element can become overloaded when it is a member of a cluster of servers that each share the load of traffic, and one or more of the other members in the cluster fail.  In this case, the remaining elements take over the work of the failed elements.  Normally, capacity planning takes such failures into account, and servers are typically run with enough spare capacity to handle failure of another element.  However, unusual failure conditions can cause many elements to fail at once.  This is often the case with software failures, where a bad packet or a bad database entry hits the same bug in a set of elements in a cluster.

   Avalanche Restart:  One of the most troubling sources of overload is avalanche restart.  This happens when a large number of clients all simultaneously attempt to connect to the network with a SIP registration.  Avalanche restart can be caused by several events.  One is the "Manhattan Reboots" scenario, where there is a power failure in a large metropolitan area, such as Manhattan.  When power is restored, all of the SIP phones, whether in PCs or standalone devices, simultaneously power on and begin booting.  They will all then connect to the network and register, causing a flood of SIP REGISTER messages.  Another cause of avalanche restart is failure of a large network connection, for example, the access router for an enterprise.  When it fails, SIP clients will detect the failure rapidly using the mechanisms in [I-D.ietf-sip-outbound].  When connectivity is restored, this is detected, and clients re-REGISTER, all within a short time period.  Another source of avalanche restart is failure of a proxy server.  If clients had all connected to the server with TCP, its failure will be detected, followed by re-connection and re-registration to another server.  Note that [I-D.ietf-sip-outbound] does provide some remedies for this case.

   Flash Crowds:  A flash crowd occurs when an extremely large number of users all attempt to simultaneously make a call.  One example of how this can happen is a television commercial that advertises a number to call to receive a free gift.  If the gift is compelling and many people see the ad, many calls can be made simultaneously to the same number.  This can send the system into overload.

   Unfortunately, the overload problem tends to compound itself.  When a network goes into overload, this can frequently cause failures of the elements that are trying to process the traffic, which places even more load on the remaining elements.  Furthermore, during overload, the overall capacity of functional elements goes down, since much of their resources are spent just rejecting or treating load that they cannot actually process.  In addition, overload tends to cause SIP messages to be delayed or lost, which causes retransmissions to be sent, further increasing the amount of work in the network.  This compounding factor can produce substantial multipliers on the load in the system.  Indeed, in the case of UDP, with as many as seven retransmits of an INVITE request prior to timeout, overload can multiply the already-heavy message volume by as much as seven!
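
   As a purely illustrative aside (not part of the requirements), the retransmission multiplier can be seen directly from the default RFC 3261 timer values.  The short Python sketch below assumes the defaults (T1 = 500 ms, Timer A doubling on each firing, and Timer B = 64*T1) and counts how many copies of a single INVITE an unresponsive, overloaded server would receive over UDP.

      # Illustrative only: INVITE retransmission schedule over UDP with the
      # default RFC 3261 timers.
      T1 = 0.5                  # seconds (RFC 3261 default)
      TIMER_B = 64 * T1         # INVITE transaction timeout

      def invite_transmissions(t1=T1, timeout=TIMER_B):
          """Times (in seconds) at which one INVITE is put on the wire."""
          times, t, interval = [0.0], 0.0, t1
          while t + interval < timeout:
              t += interval
              times.append(t)   # Timer A fires: retransmit
              interval *= 2     # Timer A doubles after each firing
          return times

      sends = invite_transmissions()
      print(len(sends), "copies sent at", sends)
      # -> 7 copies (the original plus 6 retransmissions) within 32 seconds,
      #    every one of which the overloaded server still has to absorb.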

3.  Current SIP Mechanisms

   SIP provides very basic support for overload.  It defines the 503 response code, which is sent by an element that is overloaded.  RFC 3261 defines it as follows:

      The server is temporarily unable to process the request due to a temporary overloading or maintenance of the server.  The server MAY indicate when the client should retry the request in a Retry-After header field.  If no Retry-After is given, the client MUST act as if it had received a 500 (Server Internal Error) response.

      A client (proxy or UAC) receiving a 503 (Service Unavailable) SHOULD attempt to forward the request to an alternate server.  It SHOULD NOT forward any other requests to that server for the duration specified in the Retry-After header field, if present.

      Servers MAY refuse the connection or drop the request instead of responding with 503 (Service Unavailable).

   The objective is to provide a mechanism to move the work of the overloaded server to another server, so that the request can be processed.  The Retry-After header field, when present, is meant to allow a server to tell an upstream element to back off for a period of time, so that the overloaded server can work through its backlog of work.

   RFC 3261 also instructs proxies not to forward 503 responses upstream, at SHOULD NOT strength.  This is to prevent the upstream server from mistakenly concluding that the proxy itself is overloaded, when in fact the problem was an element further downstream.
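
   Read operationally, and purely as a non-normative illustration, the quoted behavior amounts to the failover logic sketched below.  The servers are stubbed out (in this toy example every one of them answers 503 with a Retry-After of 5 seconds), and the helper names are invented for the example.

      import time
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class Response:
          status: int
          retry_after: Optional[float] = None   # seconds, from Retry-After

      def send_request(server, request):
          """Stub transport: every server in this toy example is overloaded."""
          return Response(503, retry_after=5.0)

      failover_until = {}   # server -> time before which it is not used

      def forward(request, servers):
          now = time.monotonic()
          for server in servers:
              if failover_until.get(server, 0.0) > now:
                  continue                       # still backing off this server
              response = send_request(server, request)
              if response.status != 503:
                  return response
              if response.retry_after is not None:
                  # SHOULD NOT forward other requests to this server for the
                  # duration given in Retry-After.
                  failover_until[server] = now + response.retry_after
              # Otherwise act as if a 500 had been received; either way,
              # SHOULD attempt to forward the request to an alternate server.
          return Response(503)   # every alternate refused: reject the request

      print(forward("INVITE sip:bob@example.com", ["S1", "S2", "S3"]))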

4.  Problems with the Mechanism

   On the surface, the 503 mechanism seems workable.  Unfortunately, it has had numerous problems in actual deployment.  These problems are described here.

4.1.  Load Amplification

   The principal problem with the 503 mechanism is that it tends to substantially amplify the load in the network when the network is overloaded, causing further escalation of the problem and introducing the very real possibility of congestive collapse.  Consider the topology in Figure 1.

                             +------+
                          >  |      |
                         /   |  S1  |
                        /    |      |
                       /     +------+
                      /
                     /
                    /
                   /
         +------+ /          +------+
-------> |      |/           |      |
         |  P1  |----------> |  S2  |
-------> |      |\           |      |
         +------+ \          +------+
                   \
                    \
                     \
                      \
                       \     +------+
                        \    |      |
                         \   |  S3  |
                          >  |      |
                             +------+

                    Figure 1

   Proxy P1 receives SIP requests from many sources and acts solely as a load balancer, proxying the requests to servers S1, S2, and S3 for processing.  The input load increases to the point where all three servers become overloaded.  Server S1, when it receives its next request, generates a 503.  However, because the server is loaded, it might take some time to generate the 503.  If SIP is being run over UDP, this may result in request retransmissions that further increase the work on S1.  Even in the case of TCP, if the server is loaded and the kernel cannot send TCP acknowledgements fast enough, TCP retransmits may occur.  When the 503 is received by P1, it retries the request on S2.  S2 is also overloaded, and eventually generates a 503, but in the interim may also be hit with retransmits.  P1 once again tries another server, this time S3, which also eventually rejects the request with a 503.

   Thus, the processing of this request, which ultimately failed, involved four SIP transactions, each of which may have involved many retransmissions (up to seven in the case of UDP).  Under unloaded conditions, a single request from a client would generate one request (to S1, S2, or S3) and two responses.  When the network is overloaded, a single request from the client, before timing out, could generate as many as 18 requests and as many responses when UDP is used!  The situation is better with TCP, but even if no TCP segment is ever retransmitted, a single request from the client can generate three requests and four responses.  Each server had to expend resources to process these messages.  Thus, more messages and more work were sent into the network at the very point at which the elements became overloaded.  The 503 mechanism works well when a single element is overloaded.  But when the problem is overall network load, the 503 mechanism actually generates more messages and more work for all servers, ultimately resulting in the rejection of the request anyway.

   The problem becomes amplified further if one considers proxies upstream from P1, as shown in Figure 2.

                       +------+
                     > |      | <
                    /  |  S1  |  \
                   /   |      |   \
                  /    +------+    \
                 /                  \
                /                    \
               /                      \
              /                        \
   +------+  /         +------+         \  +------+
   |      |            |      |            |      |
   |  P1  |----------> |  S2  | <----------|  P2  |
   |      |            |      |            |      |
   +------+            +------+            +------+
      ^         \                    /         ^
       \         \                  /         /
        \         \                /         /
         \         \   +------+   /         /
          \         \  |      |  /         /
           \         > |  S3  | <         /
            \          |      |          /
             \         +------+         /
              \                        /
               \                      /
                \                    /
                 \                  /
                  \                /
                   \              /
                    \            /
                     \          /
                      \        /
                       +------+
                       |      |
                       |  PA  |
                       |      |
                       +------+
                         ^  ^
                         |  |
                         |  |

                       Figure 2

   Here, proxy PA receives requests and sends them to proxy P1 or P2.  P1 and P2 both load balance across S1 through S3.  Assuming again that S1 through S3 are all overloaded, a request arrives at PA, which tries P1 first.  P1 tries S1, S2, and then S3, with each transaction resulting in many request retransmissions if UDP is used.  Since P1 is ultimately unable to have the request processed, it rejects it.  However, because all of its downstream dependencies are busy, it decides to send a 503.  This propagates to PA, which tries P2, which tries S1 through S3 again, resulting in a 503 once more.  Thus, in this case, we have doubled the number of SIP transactions and the overall work in the network compared to the previous case.  The problem is that P1 knew that S1 through S3 were overloaded, but this information was not passed back to PA and on to P2, so P2 retries S1 through S3 all over again.
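
   The exact message counts depend on which hops and timer values one assumes, but the multiplication itself is easy to see.  The following sketch is illustrative only; the rough worst case of seven UDP copies per unanswered transaction assumed here follows the retransmission discussion in Section 2.  It tallies the transactions and worst-case UDP request copies triggered by a single rejected request in each of the two figures.

      UDP_COPIES = 7   # assumed worst case per transaction (see Section 2)

      # Figure 1: client -> P1 -> {S1, S2, S3}, every server overloaded.
      fig1 = 1 + 3             # the incoming transaction, then P1 tries S1..S3
      # Figure 2: client -> PA -> {P1, P2}, each of which tries {S1, S2, S3}.
      fig2 = 1 + 2 + 2 * 3     # into PA, PA->P1 and PA->P2, then 2 x 3 retries

      for name, n in (("Figure 1", fig1), ("Figure 2", fig2)):
          print(f"{name}: {n} transactions, up to {n * UDP_COPIES} request"
                f" messages over UDP, plus a comparable number of responses")
      # The two-tier case roughly doubles the Figure 1 count, as described in
      # the text, and all of it is work spent on a request that an unloaded
      # network would have handled with a single forwarded request.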

4.2.  Underutilization

   Interestingly, there are also examples of deployments where the network capacity was greatly reduced as a consequence of the overload mechanism.  Consider again Figure 1.  Unfortunately, RFC 3261 is unclear on the scope of a 503.  When it is received by P1, does the proxy cease sending requests to that IP address?  To the hostname?  To the URI?  Some implementations have chosen the hostname as the scope.  When the hostname for a URI points to an SRV record in the DNS, which, in turn, maps to a cluster of downstream servers (S1, S2, and S3 in the example), a 503 response from a single one of them will make the proxy believe that the entire cluster is overloaded.  Consequently, proxy P1 will cease sending any traffic to any element in the cluster, even though there are elements in the cluster that are underutilized.

4.3.  The Off/On Retry-After Problem

   The Retry-After mechanism allows a server to tell an upstream element to stop sending traffic for a period of time.  The work that would otherwise have been sent to that server is instead sent to another server.  The mechanism is an all-or-nothing technique.  A server can turn off all of the traffic towards it, or none of it.  There is nothing in between.  This tends to cause highly oscillatory behavior under even mild overload.  Consider a proxy P1 that is balancing requests between two servers, S1 and S2.  The input load just reaches the point where both S1 and S2 are at 100% capacity.  A request arrives at P1 and is sent to S1.  S1 rejects this request with a 503 and decides to use Retry-After to clear its backlog.  P1 stops sending all traffic to S1.  Now S2 gets all of the traffic, but it is now seriously overloaded, at 200% of capacity!  It too decides to reject a request with a 503 and a Retry-After, which now forces P1 to reject all traffic until S1's Retry-After timer expires.  At that point, all of the load is shunted back to S1, which promptly goes into overload again, and the cycle repeats.

   It is important to observe that this problem arises only for servers that receive traffic from a small number of upstream elements, as is the case in these examples.  If a proxy is accessed by a large number of clients, each of which sends a small amount of traffic, the 503 mechanism with Retry-After is quite effective when applied to a subset of the clients.  This is because spreading the 503 responses out amongst the clients gives the proxy more fine-grained control over the amount of work it receives.
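
   The oscillation is easy to reproduce in a toy model.  The following sketch is purely illustrative (the capacities, offered load, and Retry-After value are invented): the two-server example is modeled in discrete time ticks, and whenever a server is pushed past its capacity it answers with 503 plus Retry-After, after which the proxy sends it nothing at all until the timer expires.

      CAPACITY, OFFERED_LOAD, RETRY_AFTER = 1.0, 2.2, 3   # made-up numbers
      blocked_until = {"S1": 0, "S2": 0}   # tick at which each server is usable

      for tick in range(12):
          available = [s for s, t in blocked_until.items() if t <= tick]
          if not available:
              print(f"tick {tick:2}: no server accepting traffic, throughput 0.0")
              continue
          share = OFFERED_LOAD / len(available)   # proxy balances what is left
          if share > CAPACITY:
              # The overloaded server answers 503 with Retry-After, and the
              # proxy then sends it nothing at all: all or nothing.
              blocked_until[available[0]] = tick + RETRY_AFTER
          throughput = sum(min(share, CAPACITY) for _ in available)
          print(f"tick {tick:2}: serving via {available}, throughput {throughput:.1f}")
      # Rather than settling near the 2.0 the two servers could jointly handle,
      # the run quickly degenerates into a cycle of half service and outright
      # rejection of everything.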

4.4.  Ambiguous Usages

   Unfortunately, the specific conditions under which a server is to send a 503 are ambiguous.  The result is that implementations generate 503 responses for many reasons, only some of which are related to actual overload.

   For example, RFC 3398 [RFC3398], which specifies interworking from SIP to ISUP, defines the usage of 503 when the gateway receives certain ISUP cause codes from downstream switches.  In these cases, the gateway has ample capacity; it is simply that this specific request could not be processed because of a downstream problem.

   This causes two problems.  First, during periods of overload, it exacerbates the problems described above, because it causes additional 503 responses to be fed into the system, generating yet more work under conditions of overload.  Second, it becomes hard for an upstream element to know whether to retry when a 503 is received.  There are classes of failures for which trying another server won't help, since the reason for the failure is that a common downstream resource is unavailable.  For example, if servers S1 and S2 share a database and the database fails, a request sent to S1 will result in a 503, but retrying on S2 won't help, since the same database is unavailable.
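
   To make the second problem concrete, the sketch below (illustrative only; the scenarios and helper are invented for the example) shows the retry decision an upstream element is left with.  All three situations arrive as the same bare 503, so no single retry policy handles them all correctly.

      from dataclasses import dataclass

      @dataclass
      class Scenario:
          description: str
          status: int            # all that the upstream element actually sees

      scenarios = [
          Scenario("server CPU exhausted (true overload)",                503),
          Scenario("gateway mapped an ISUP cause code to 503 (RFC 3398)", 503),
          Scenario("database shared by S1 and S2 is down",                503),
      ]

      def retry_on_another_server(status: int) -> bool:
          # Only the status code is available, so the same answer comes back
          # for all three scenarios, even though retrying elsewhere is useful
          # in some of them and pure extra load in others.
          return status == 503

      for s in scenarios:
          print(f"{s.description}: retry elsewhere? {retry_on_another_server(s.status)}")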

5.  Solution Requirements

   In this section, we propose requirements for an overload control mechanism for SIP that addresses these problems.

   REQ 1:  The overload mechanism shall strive to maintain the overall useful throughput (taking into consideration the quality-of-service needs of the using applications) of a SIP server at reasonable levels, even when the incoming load on the network is far in excess of its capacity.  The overall throughput under load is the ultimate measure of the value of an overload control mechanism.

   REQ 2:  When a single network element fails, goes into overload, or suffers from reduced processing capacity, the mechanism should strive to limit the impact of this on other elements in the network.  This helps to prevent a small-scale failure from becoming a widespread outage.

   REQ 3:  The mechanism should seek to minimize the amount of configuration required in order to work.  For example, it is better to avoid needing to configure a server with its SIP message throughput, as these kinds of quantities are hard to determine.

   REQ 4:  The mechanism must be capable of dealing with elements that do not support it, so that a network can consist of a mix of elements that do and do not support it.  In other words, the mechanism should not work only in environments where all elements support it.  It is reasonable to assume that it works better in such environments, of course.  Ideally, there should be incremental improvements in overall network throughput as increasing numbers of elements in the network support the mechanism.

   REQ 5:  The mechanism should not assume that it will only be deployed in environments with completely trusted elements.  It should seek to operate as effectively as possible in environments where other elements are malicious, including preventing malicious elements from obtaining more than a fair share of service.

   REQ 6:  The mechanism shall provide a way to unambiguously inform an upstream element that it is overloaded.  Any response codes, header fields, or other protocol machinery utilized for this purpose shall be used exclusively for overload handling, and not be used to indicate other failure conditions.  This is meant to avoid some of the problems that have arisen from the reuse of the 503 response code for multiple purposes.

   REQ 7:  The mechanism shall provide a way for an element to throttle the amount of traffic it receives from an upstream element.  This throttling shall be graded, so that it is not all or nothing as with the current 503 mechanism (a non-normative sketch of one such graded throttle follows this list).  This recognizes the fact that "overload" is not a binary state and that there are degrees of overload.

   REQ 8:  The mechanism shall ensure that, when a request has been rejected by an overloaded element, it is not sent to another element that is also overloaded.  This requirement derives from REQ 1.

   REQ 9:  The fact that a request has been rejected by an overloaded element shall not unduly restrict the ability of that request to be submitted to and processed by an element that is not overloaded.  This requirement derives from REQ 1.

   REQ 10:  The mechanism should support servers that receive requests from a large number of different upstream elements, where the set of upstream elements is not enumerable.

   REQ 11:  The mechanism should support servers that receive requests from a finite set of upstream elements, where the set of upstream elements is enumerable.

   REQ 12:  The mechanism should work between servers in different domains.

   REQ 13:  The mechanism must not dictate a specific algorithm for prioritizing the processing of work within a proxy during times of overload.  It must permit a proxy to prioritize requests based on any local policy, so that certain ones (such as a call for emergency services or a call with a specific value of the Resource-Priority header field [RFC4412]) are processed ahead of others.

   REQ 14:  The mechanism should provide unambiguous directions to clients on when they should retry a request and when they should not.  This especially applies to TCP connection establishment and SIP registrations, in order to mitigate against avalanche restart.

   REQ 15:  In cases where a network element fails, is so overloaded that it cannot process messages, or cannot communicate due to a network failure or network partition, it will not be able to provide explicit indications of its levels of congestion.  The mechanism should properly function in these cases.

   REQ 16:  The mechanism should attempt to minimize the overhead of the overload control messaging.

   REQ 17:  The overload mechanism must not provide an avenue for malicious attack.

   REQ 18:  The overload mechanism should be unambiguous about whether a load indication applies to a specific IP address, host, or URI, so that an upstream element can determine the load of the entity to which a request is to be sent.

   REQ 19:  The specification for the overload mechanism should give guidance on which message types it might be desirable to process over others during times of overload, based on SIP-specific considerations.  For example, it may be more beneficial to process a SUBSCRIBE refresh with an Expires of zero than a SUBSCRIBE refresh with a non-zero expiration (since the former reduces the overall amount of load on the element), or to process re-INVITEs over new INVITEs.

   REQ 20:  In a mixed environment of elements that do and do not implement the overload mechanism, no disproportionate benefit shall accrue to the users or operators of the elements that do not implement the mechanism.

   REQ 21:  The overload mechanism should ensure that the system remains stable.  When the offered load drops from above the overall capacity of the network to below the overall capacity, the throughput should stabilize and become equal to the offered load.

   REQ 22:  It must be possible to disable the reporting of load information towards upstream targets based on the identity of those targets.  This allows a domain administrator who considers the load of their elements to be sensitive information to restrict access to that information.  Of course, in such cases, there is no expectation that the overload mechanism itself will help prevent overload from that upstream target.

   REQ 23:  It must be possible for the overload mechanism to work in cases where there is a load balancer in front of a farm of proxies.
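
   As one non-normative illustration of the graded throttling called for in REQ 7 (this document defines requirements only, not a mechanism), a downstream element could advertise the fraction of traffic it is currently willing to accept, and the upstream element could forward only that fraction rather than stopping entirely.  The names and numbers in the sketch are invented.

      import random

      class GradedThrottle:
          def __init__(self):
              self.accept_fraction = 1.0          # 1.0 means no throttling

          def update(self, advertised_fraction):
              """Downstream element reports how much traffic it can take."""
              self.accept_fraction = max(0.0, min(1.0, advertised_fraction))

          def admit(self):
              """Statistically admit the advertised fraction of new requests."""
              return random.random() < self.accept_fraction

      throttle = GradedThrottle()
      throttle.update(0.8)      # ask for a 20% reduction, not a complete stop
      admitted = sum(throttle.admit() for _ in range(10_000))
      print(f"forwarded roughly {admitted} of 10000 requests")

   Unlike an on/off Retry-After, a control of this general shape can express degrees of overload and can be relaxed gradually as the backlog clears.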

6.  Security Considerations

   Like all protocol mechanisms, a solution for overload handling must protect against malicious inside and outside attacks.  This document includes requirements for such security functions.

7.  IANA Considerations

   None.

8.  Acknowledgements

   The author would like to thank Steve Mayer, Mouli Chandramouli, Robert Whent, Mark Perkins, Joe Stone, Vijay Gurbani, Steve Norreys, Volker Hilt, Spencer Dawkins, and Dale Worley for their contributions to this document.

9.  Informative References

   [RFC3261]  Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, A., Peterson, J., Sparks, R., Handley, M., and E. Schooler, "SIP: Session Initiation Protocol", RFC 3261, June 2002.

   [RFC3398]  Camarillo, G., Roach, A., Peterson, J., and L. Ong, "Integrated Services Digital Network (ISDN) User Part (ISUP) to Session Initiation Protocol (SIP) Mapping", RFC 3398, December 2002.

   [RFC4412]  Schulzrinne, H. and J. Polk, "Communications Resource Priority for the Session Initiation Protocol (SIP)", RFC 4412, February 2006.

   [I-D.ietf-sip-outbound]  Jennings, C. and R. Mahy, "Managing Client Initiated Connections in the Session Initiation Protocol (SIP)", draft-ietf-sip-outbound-10 (work in progress), July 2007.

Author's Address

   Jonathan Rosenberg
   Cisco
   Edison, NJ
   US

   Email: jdrosen@cisco.com
   URI:   http://www.jdrosen.net

Full Copyright Statement

   Copyright (C) The IETF Trust (2007).

   This document is subject to the rights, licenses and restrictions contained in BCP 78, and except as set forth therein, the authors retain all their rights.

   This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any Intellectual Property Rights or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; nor does it represent that it has made any independent effort to identify any such rights.  Information on the procedures with respect to rights in RFC documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification can be obtained from the IETF on-line IPR repository at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights that may cover technology that may be required to implement this standard.  Please address the information to the IETF at ietf-ipr@ietf.org.

Acknowledgment

   Funding for the RFC Editor function is provided by the IETF Administrative Support Activity (IASA).