2 SIPPING J. Rosenberg
3 Internet-Draft Cisco
4 Intended status: Informational July 14, 2008
5 Expires: January 15, 2009

7 Requirements for Management of Overload in the Session Initiation
8 Protocol
9 draft-ietf-sipping-overload-reqs-05

11 Status of this Memo

13 By submitting this Internet-Draft, each author represents that any
14 applicable patent or other IPR claims of which he or she is aware
15 have been or will be disclosed, and any of which he or she becomes
16 aware will be disclosed, in accordance with Section 6 of BCP 79.

18 Internet-Drafts are working documents of the Internet Engineering
19 Task Force (IETF), its areas, and its working groups. Note that
20 other groups may also distribute working documents as Internet-
21 Drafts.

23 Internet-Drafts are draft documents valid for a maximum of six months
24 and may be updated, replaced, or obsoleted by other documents at any
25 time. It is inappropriate to use Internet-Drafts as reference
26 material or to cite them other than as "work in progress."

28 The list of current Internet-Drafts can be accessed at
29 http://www.ietf.org/ietf/1id-abstracts.txt.

31 The list of Internet-Draft Shadow Directories can be accessed at
32 http://www.ietf.org/shadow.html.

34 This Internet-Draft will expire on January 15, 2009.

36 Copyright Notice

38 Copyright (C) The IETF Trust (2008).

40 Abstract

42 Overload occurs in Session Initiation Protocol (SIP) networks when
43 proxies and user agents have insufficient resources to complete the
44 processing of a request.
SIP provides limited support for overload
45 handling through its 503 response code, which tells an upstream
46 element that it is overloaded. However, numerous problems have been
47 identified with this mechanism. This draft summarizes the problems
48 with the existing 503 mechanism, and provides some requirements for a
49 solution.

51 Table of Contents

53 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3
54 2. Causes of Overload . . . . . . . . . . . . . . . . . . . . . . 3
55 3. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 5
56 4. Current SIP Mechanisms . . . . . . . . . . . . . . . . . . . . 5
57 5. Problems with the Mechanism . . . . . . . . . . . . . . . . . 6
58 5.1. Load Amplification . . . . . . . . . . . . . . . . . . . . 6
59 5.2. Underutilization . . . . . . . . . . . . . . . . . . . . . 9
60 5.3. The Off/On Retry-After Problem . . . . . . . . . . . . . . 9
61 5.4. Ambiguous Usages . . . . . . . . . . . . . . . . . . . . . 10
62 6. Solution Requirements . . . . . . . . . . . . . . . . . . . . 10
63 7. Security Considerations . . . . . . . . . . . . . . . . . . . 13
64 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 13
65 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 13
66 10. Informative References . . . . . . . . . . . . . . . . . . . . 14
67 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . . 14
68 Intellectual Property and Copyright Statements . . . . . . . . . . 15

70 1. Introduction

72 Overload occurs in Session Initiation Protocol (SIP) [RFC3261]
73 networks when proxies and user agents have insufficient resources to
74 complete the processing of a request or a response. SIP provides
75 limited support for overload handling through its 503 response code.
76 This code allows a server to tell an upstream element that it is
77 overloaded. However, numerous problems have been identified with
78 this mechanism.
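For concreteness, a 503 is an ordinary SIP response message. The sketch
below (in Python, with purely illustrative Via, Call-ID and CSeq values,
and only a minimal subset of the headers a real response carries) shows
roughly what an overloaded server emits; it is not part of any SIP stack.

```python
def build_503(via, call_id, cseq, retry_after=None):
    """Sketch of a 503 (Service Unavailable) response as raw SIP text.

    Only a minimal subset of headers is shown; a real response echoes
    the full header set required by RFC 3261.
    """
    lines = [
        "SIP/2.0 503 Service Unavailable",
        "Via: " + via,
        "Call-ID: " + call_id,
        "CSeq: " + cseq,
    ]
    if retry_after is not None:
        # Optional: tells the upstream element how many seconds to back off.
        lines.append("Retry-After: %d" % retry_after)
    lines.append("Content-Length: 0")
    return "\r\n".join(lines) + "\r\n\r\n"

# Illustrative values only.
msg = build_503("SIP/2.0/UDP p1.example.com;branch=z9hG4bK776asdhds",
                "a84b4c76e66710@p1.example.com", "314159 INVITE",
                retry_after=120)
```

Section 4 reviews the behavior that RFC 3261 attaches to this response.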
80 This draft describes the general problem of SIP overload, and then
81 reviews the current SIP mechanisms for dealing with overload. It
82 then explains some of the problems with these mechanisms. Finally,
83 the document provides a set of requirements for fixing these
84 problems.

86 2. Causes of Overload

88 Overload occurs when an element, such as a SIP user agent or proxy,
89 has insufficient resources to successfully process all of the traffic
90 it is receiving. Resources include all of the capabilities of the
91 element used to process a request, including CPU processing, memory,
92 I/O, or disk resources. It can also include external resources, such
93 as a database or DNS server, in which case the CPU processing,
94 memory, I/O and disk resources of those servers are effectively part
95 of the logical element processing the request. Overload can occur
96 for many reasons, including:

98 Poor Capacity Planning: SIP networks need to be designed with
99 sufficient numbers of servers, hardware, disks, and so on, in
100 order to meet the needs of the subscribers they are expected to
101 serve. Capacity planning is the process of determining these
102 needs. It is based on the number of expected subscribers and the
103 types of flows they are expected to use. If this work is not done
104 properly, the network may have insufficient capacity to handle
105 predictable usages, including regular usages and predictably high
106 ones (such as high voice calling volumes on Mother's Day).

108 Dependency Failures: A SIP element can become overloaded because a
109 resource on which it is dependent has failed or become overloaded,
110 greatly reducing the logical capacity of the element. In these
111 cases, even minimal traffic might cause the server to go into
112 overload. Examples of such dependency overloads include DNS
113 servers, databases, disks and network interfaces.
115 Component Failures: A SIP element can become overloaded when it is a 116 member of a cluster of servers which each share the load of 117 traffic, and one or more of the other members in the cluster fail. 118 In this case, the remaining elements take over the work of the 119 failed elements. Normally, capacity planning takes such failures 120 into account, and servers are typically run with enough spare 121 capacity to handle failure of another element. However, unusual 122 failure conditions can cause many elements to fail at once. This 123 is often the case with software failures, where a bad packet or 124 bad database entry hits the same bug in a set of elements in a 125 cluster. 127 Avalanche Restart: One of the most troubling sources of overload is 128 avalanche restart. This happens when a large number of clients 129 all simultaneously attempt to connect to the network with a SIP 130 registration. Avalanche restart can be caused by several events. 131 One is the "Manhattan Reboots" scenario, where there is a power 132 failure in a large metropolitan area, such as Manhattan. When 133 power is restored, all of the SIP phones, whether in PCs or 134 standalone devices, simultaneously power on and begin booting. 135 They will all then connect to the network and register, causing a 136 flood of SIP REGISTER messages. Another cause of avalanche 137 restart is failure of a large network connection, for example, the 138 access router for an enterprise. When it fails, SIP clients will 139 detect the failure rapidly using the mechanisms in 140 [I-D.ietf-sip-outbound]. When connectivity is restored, this is 141 detected, and clients re-REGISTER, all within a short time period. 142 Another source of avalanche restart is failure of a proxy server. 143 If clients had all connected to the server with TCP, its failure 144 will be detected, followed by re-connection and re-registration to 145 another server. 
Note that [I-D.ietf-sip-outbound] does provide
146 some remedies to this case.

148 Flash Crowds: A flash crowd occurs when an extremely large number of
149 users all attempt to simultaneously make a call. One example of
150 how this can happen is a television commercial that advertises a
151 number to call to receive a free gift. If the gift is compelling
152 and many people see the ad, many calls can be simultaneously made
153 to the same number. This can send the system into overload.

155 Denial of Service (DoS) Attacks: An attacker, wishing to disrupt
156 service in the network, can cause a large amount of traffic to be
157 launched at a target server. This can be done from a central
158 source of traffic, or through a distributed DoS attack. In all
159 cases, the volume of traffic well exceeds the capacity of the
160 server, sending it into overload.

162 Unfortunately, the overload problem tends to compound itself. When a
163 network goes into overload, this can frequently cause failures of the
164 elements that are trying to process the traffic. This causes even
165 more load on the remaining elements. Furthermore, during overload,
166 the overall capacity of functional elements goes down, since much of
167 their resources are spent just rejecting or treating load that they
168 cannot actually process. In addition, overload tends to cause SIP
169 messages to be delayed or lost, which causes retransmissions to be
170 sent, further increasing the amount of work in the network. This
171 compounding factor can produce substantial multipliers on the load in
172 the system. Indeed, in the case of UDP, with as many as 7
173 retransmits of an INVITE request prior to timeout, overload can
174 multiply the already-heavy message volume by as much as seven!

176 3.
Terminology

178 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
179 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
180 document are to be interpreted as described in RFC 2119 [RFC2119].

182 4. Current SIP Mechanisms

184 SIP provides very basic support for overload. It defines the 503
185 response code, which is sent by an element that is overloaded. RFC
186 3261 defines it as follows:

188 The server is temporarily unable to process the request due to
189 a temporary overloading or maintenance of the server. The
190 server MAY indicate when the client should retry the request in
191 a Retry-After header field. If no Retry-After is given, the
192 client MUST act as if it had received a 500 (Server Internal
193 Error) response.

195 A client (proxy or UAC) receiving a 503 (Service Unavailable)
196 SHOULD attempt to forward the request to an alternate server.
197 It SHOULD NOT forward any other requests to that server for the
198 duration specified in the Retry-After header field, if present.

200 Servers MAY refuse the connection or drop the request instead of
201 responding with 503 (Service Unavailable).

203 The objective is to provide a mechanism to move the work of the
204 overloaded server to another server, so that the request can be
205 processed. The Retry-After header field, when present, is meant to
206 allow a server to tell an upstream element to back off for a period
207 of time, so that the overloaded server can work through its backlog
208 of work.

210 RFC 3261 also instructs proxies not to forward 503 responses
211 upstream, at SHOULD NOT strength. This is to prevent the upstream
212 element from mistakenly concluding that the proxy itself is
213 overloaded, when in fact the problem was an element further
214 downstream.

215 5. Problems with the Mechanism

217 On the surface, the 503 mechanism seems workable. Unfortunately,
218 this mechanism has had numerous problems in actual deployment. These
219 problems are described here.
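As a rough illustration of the client behavior quoted above, the following
sketch (illustrative names only, not the API of any real SIP stack)
captures the two rules: fail over to an alternate server on a 503, and
stop sending to a server for the duration given in Retry-After.

```python
import time

class ServerPool:
    """Tracks which downstream servers are currently usable."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.blocked_until = {}  # server -> time before which it must not be used

    def available(self, now=None):
        now = time.time() if now is None else now
        return [s for s in self.servers if self.blocked_until.get(s, 0) <= now]

    def report_503(self, server, retry_after=None, now=None):
        now = time.time() if now is None else now
        if retry_after is not None:
            # SHOULD NOT forward other requests for the Retry-After duration.
            self.blocked_until[server] = now + retry_after

def forward(pool, send, request, now=None):
    """Try each available server in turn; a 503 triggers failover."""
    for server in pool.available(now):
        status, retry_after = send(server, request)
        if status != 503:
            return server, status
        pool.report_503(server, retry_after, now)
    # All downstream servers refused: the request fails upstream.
    return None, 503
```

This serial failover is exactly the behavior that Section 5.1 shows
amplifying load when every downstream server is overloaded at once.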
221 5.1. Load Amplification 223 The principal problem with the 503 mechanism is that it tends to 224 substantially amplify the load in the network when the network is 225 overloaded, causing further escalation of the problem and introducing 226 the very real possibility of congestive collapse. Consider the 227 topology in Figure 2. 229 +------+ 230 > | | 231 / | S1 | 232 / | | 233 / +------+ 234 / 235 / 236 / 237 / 238 +------+ / +------+ 239 --------> | |/ | | 240 | P1 |---------> | S2 | 241 --------> | |\ | | 242 +------+ \ +------+ 243 \ 244 \ 245 \ 246 \ 247 \ 248 \ +------+ 249 \ | | 250 > | S3 | 251 | | 252 +------+ 254 Figure 2 256 Proxy P1 receives SIP requests from many sources, and acts solely as 257 a load balancer, proxying the requests to servers S1, S2 and S3 for 258 processing. The input load increases to the point where all three 259 servers become overloaded. Server S1, when it receives its next 260 request, generates a 503. However, because the server is loaded, it 261 might take some time to generate the 503. If SIP is being run over 262 UDP, this may result in request retransmissions which further 263 increase the work on S1. Even in the case of TCP, if the server is 264 loaded and the kernel cannot send TCP acknowledgements fast enough, 265 TCP retransmits may occur. When the 503 is received by P1, it 266 retries the request on S2. S2 is also overloaded, and eventually 267 generates a 503, but in the interim may also be hit with retransmits. 268 P1 once again tries another server, this time S3, which also 269 eventually rejects it with a 503. 271 Thus, the processing of this request, which ultimately failed, 272 involved four SIP transactions (client to P1, P1 to S1, P1 to S2, P1 273 to S3), each of which may have involved many retransmissions - up to 274 7 in the case of UDP. 
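The "up to 7" figure follows from the default SIP timers. A small sketch,
assuming the RFC 3261 defaults (T1 = 500 ms, Timer B = 64*T1, with Timer A
starting at T1 and doubling after each send), computes the times at which
a single INVITE is put on the wire over UDP:

```python
def invite_send_times(t1=0.5):
    """Times (in seconds) at which one INVITE is sent over UDP, assuming
    RFC 3261 default timers: Timer A starts at T1 and doubles after each
    retransmission, until Timer B (64*T1) fires."""
    timer_b = 64 * t1
    times, t, interval = [0.0], 0.0, t1
    while t + interval < timer_b:
        t += interval
        times.append(t)
        interval *= 2  # Timer A doubles on each retransmission
    return times

# With the defaults: sends at 0, 0.5, 1.5, 3.5, 7.5, 15.5 and 31.5 seconds,
# i.e. seven transmissions of the request before the 32 second timeout.
```

(Whether one calls this "seven retransmits" or "one send plus six
retransmits" is a counting convention; either way each transaction can put
up to seven copies of the request on the wire.)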
Thus, under unloaded conditions, a single 275 request from a client would generate one request (to S1, S2 or S3) 276 and two responses (from S1 to P1, then P1 to the client). When the 277 network is overloaded, a single request from the client, before 278 timing out, could generate as many as 18 requests and as many 279 responses when UDP is used! The situation is better with TCP (or any 280 reliable transport in general), but even if there was never a TCP 281 segment retransmitted, a single request from the client can generate 282 3 requests and four responses. Each server had to expend resources 283 to process these messages. Thus, more messages and more work were 284 sent into the network at the point at which the elements became 285 overloaded. The 503 mechanism works well when a single element is 286 overloaded. But, when the problem is overall network load, the 503 287 mechanism actually generates more messages and more work for all 288 servers, ultimately resulting in the rejection of the request anyway. 290 The problem becomes amplified further if one considers proxies 291 upstream from P1, as shown in Figure 3. 293 +------+ 294 > | | < 295 / | S1 | \\ 296 / | | \\ 297 / +------+ \\ 298 / \ 299 / \\ 300 / \\ 301 / \ 302 +------+ / +------+ +------+ 303 | | / | | | | 304 | P1 | ---------> | S2 |<----------| P2 | 305 | | \ | | | | 306 +------+ \ +------+ +------+ 307 ^ \ / ^ 308 \ \ // / 309 \ \ // / 310 \ \ // / 311 \ \ / / 312 \ \ +------+ // / 313 \ \ | | // / 314 \ > | S3 | < / 315 \ | | / 316 \ +------+ / 317 \ / 318 \ / 319 \ / 320 \ / 321 \ / 322 \ / 323 \ / 324 \ / 325 +------+ 326 | | 327 | PA | 328 | | 329 +------+ 330 ^ ^ 331 | | 332 | | 334 Figure 3 336 Here, proxy PA receives requests, and sends these to proxies P1 or 337 P2. P1 and P2 both load balance across S1 through S3. Assuming 338 again S1 through S3 are all overloaded, a request arrives at PA, 339 which tries P1 first. 
P1 tries S1, S2 and then S3, with each
340 transaction resulting in many request retransmits if UDP is used.

342 Since P1 is unable to eventually process the request, it rejects it.
343 However, since all of its downstream dependencies are busy, it
344 decides to send a 503. This propagates to PA, which tries P2, which
345 tries S1 through S3 again, resulting in a 503 once more. Thus, in
346 this case, we have doubled the number of SIP transactions and overall
347 work in the network compared to the previous case. The problem here
348 is that the fact that S1 through S3 were overloaded was known to P1,
349 but this information was not passed back to PA and through to P2, so
350 P2 retries S1 through S3 again.

352 5.2. Underutilization

354 Interestingly, there are also examples of deployments where the
355 network capacity was greatly reduced as a consequence of the overload
356 mechanism. Consider again Figure 2. Unfortunately, RFC 3261 is
357 unclear on the scope of a 503. When it is received by P1, does the
358 proxy cease sending requests to that IP address? To the hostname?
359 To the URI? Some implementations have chosen the hostname as the
360 scope. When the hostname for a URI points to an SRV record in the
361 DNS, which, in turn, maps to a cluster of downstream servers (S1, S2
362 and S3 in the example), a 503 response from a single one of them will
363 make the proxy believe that the entire cluster is overloaded.
364 Consequently, proxy P1 will cease sending any traffic to any element
365 in the cluster, even though there are elements in the cluster that
366 are underutilized.

368 5.3. The Off/On Retry-After Problem

370 The Retry-After mechanism allows a server to tell an upstream element
371 to stop sending traffic for a period of time. The work that would
372 have otherwise been sent to that server is instead sent to another
373 server. The mechanism is an all-or-nothing technique. A server can
374 turn off all traffic towards it, or none of it.
There is nothing in
375 between. This tends to cause highly oscillatory behavior under even
376 mild overload. Consider a proxy P1 which is balancing requests
377 between two servers S1 and S2. The input load just reaches the point
378 where both S1 and S2 are at 100% capacity. A request arrives at P1,
379 and is sent to S1. S1 rejects this request with a 503, and decides
380 to use Retry-After to clear its backlog. P1 stops sending all
381 traffic to S1. Now, S2 gets traffic, but it is seriously overloaded
382 - at 200% capacity! It decides to reject a request with a 503 and a
383 Retry-After, which now forces P1 to reject all traffic until S1's
384 Retry-After timer expires. At that point, all load is shunted back
385 to S1, which reaches overload, and the cycle repeats.

387 It is important to observe that this problem is only observed for
388 servers where there are a small number of upstream elements sending
389 it traffic, as is the case in these examples. If a proxy was
390 accessed by a large number of clients, each of which sends a small
391 amount of traffic, the 503 mechanism with Retry-After is quite
392 effective when utilized with a subset of the clients. This is
393 because spreading the 503 out amongst the clients has the effect of
394 providing the proxy more fine-grained controls on the amount of work
395 it receives.

397 5.4. Ambiguous Usages

399 Unfortunately, the specific instances under which a server is to send
400 a 503 are ambiguous. The result is that implementations generate 503
401 responses for many reasons, only some of which are related to actual
402 overload. For example, RFC 3398 [RFC3398], which specifies
403 interworking from SIP to ISUP, defines the usage of 503 when the
404 gateway receives certain ISUP cause codes from downstream switches.
405 In these cases, the gateway has ample capacity; it is just that this
406 specific request could not be processed because of a downstream
407 problem.
All
407 subsequent requests might succeed if they take a different route in
408 the PSTN.

410 This causes two problems. Firstly, during periods of overload, it
411 exacerbates the problems above because it causes additional 503
412 responses to be fed into the system, causing further work to be
413 generated in conditions of overload. Secondly, it becomes hard
414 for an upstream element to know whether to retry when a 503 is
415 received. There are classes of failures where trying on another
416 server won't help, since the reason for the failure was that a common
417 downstream resource is unavailable. For example, suppose servers S1
418 and S2 share a database, and the database fails. A request sent to
419 S1 will result in a 503, but retrying on S2 won't help since the same
420 database is unavailable.

422 6. Solution Requirements

424 In this section, we propose requirements for an overload control
425 mechanism for SIP which addresses these problems.

427 REQ 1: The overload mechanism shall strive to maintain the overall
428 useful throughput (taking into consideration the quality-of-
429 service needs of the using applications) of a SIP server at
430 reasonable levels even when the incoming load on the network is
431 far in excess of its capacity. The overall throughput under load
432 is the ultimate measure of the value of an overload control
433 mechanism.

435 REQ 2: When a single network element fails, goes into overload, or
436 suffers from reduced processing capacity, the mechanism should
437 strive to limit the impact of this on other elements in the
438 network. This helps to prevent a small-scale failure from
439 becoming a widespread outage.

441 REQ 3: The mechanism should seek to minimize the amount of
442 configuration required in order to work. For example, it is
443 better to avoid needing to configure a server with its SIP message
444 throughput, as these kinds of quantities are hard to determine.
446 REQ 4: The mechanism must be capable of dealing with elements which 447 do not support it, so that a network can consist of a mix of ones 448 which do and don't support it. In other words, the mechanism 449 should not work only in environments where all elements support 450 it. It is reasonable to assume that it works better in such 451 environments, of course. Ideally, there should be incremental 452 improvements in overall network throughput as increasing numbers 453 of elements in the network support the mechanism. 455 REQ 5: The mechanism should not assume that it will only be deployed 456 in environments with completely trusted elements. It should seek 457 to operate as effectively as possible in environments where other 458 elements are malicious, including preventing malicious elements 459 from obtaining more than a fair share of service. 461 REQ 6: When overload is signaled by means of a specific message, the 462 message must clearly indicate that it is being sent because of 463 overload, as opposed to other, non-overload based failure 464 conditions. This requirement is meant to avoid some of the 465 problems that have arisen from the reuse of the 503 response code 466 for multiple purposes. Of course, overload is also signaled by 467 lack of response to requests. This requirement applies only to 468 explicit overload signals. 470 REQ 7: The mechanism shall provide a way for an element to throttle 471 the amount of traffic it receives from an upstream element. This 472 throttling shall be graded, so that it is not all or nothing as 473 with the current 503 mechanism. This recognizes the fact that 474 "overload" is not a binary state, and there are degrees of 475 overload. 477 REQ 8: The mechanism shall ensure that, when a request was not 478 processed successfully due to overload (or failure) of a 479 downstream element, the request will not be retried on another 480 element which is also overloaded or whose status is unknown. 
This
481 requirement derives from REQ 1.

483 REQ 9: The fact that a request has been rejected by an overloaded
484 element shall not unduly restrict the ability of that request to be
485 submitted to and processed by an element that is not overloaded.
486 This requirement derives from REQ 1.

488 REQ 10: The mechanism should support servers that receive requests
489 from a large number of different upstream elements, where the set
490 of upstream elements is not enumerable.

492 REQ 11: The mechanism should support servers that receive requests
493 from a finite set of upstream elements, where the set of upstream
494 elements is enumerable.

496 REQ 12: The mechanism should work between servers in different
497 domains.

499 REQ 13: The mechanism must not dictate a specific algorithm for
500 prioritizing the processing of work within a proxy during times of
501 overload. It must permit a proxy to prioritize requests based on
502 any local policy, so that certain ones (such as a call for
503 emergency services or a call with a specific value of the
504 Resource-Priority header field [RFC4412]) are given preferential
505 treatment, such as not being dropped, being given additional
506 retransmission, or being processed ahead of others.

508 REQ 14: The mechanism should provide unambiguous directions to
509 clients on when they should retry a request, and when they should
510 not. This especially applies to TCP connection establishment and
511 SIP registrations, in order to mitigate avalanche restart.

513 REQ 15: In cases where a network element fails, is so overloaded
514 that it cannot process messages, or cannot communicate due to a
515 network failure or network partition, it will not be able to
516 provide explicit indications of the nature of the failure or its
517 levels of congestion. The mechanism must properly function in
518 these cases.

520 REQ 16: The mechanism should attempt to minimize the overhead of the
521 overload control messaging.
523 REQ 17: The overload mechanism must not provide an avenue for 524 malicious attack, including DoS and DDoS attacks. 526 REQ 18: The overload mechanism should be unambiguous about whether a 527 load indication applies to a specific IP address, host, or URI, so 528 that an upstream element can determine the load of the entity to 529 which a request is to be sent. 531 REQ 19: The specification for the overload mechanism should give 532 guidance on which message types might be desirable to process over 533 others during times of overload, based on SIP-specific 534 considerations. For example, it may be more beneficial to process 535 a SUBSCRIBE refresh with Expires of zero than a SUBSCRIBE refresh 536 with a non-zero expiration, since the former reduces the overall 537 amount of load on the element, or to process re-INVITEs over new 538 INVITEs. 540 REQ 20: In a mixed environment of elements that do and do not 541 implement the overload mechanism, no disproportionate benefit 542 shall accrue to the users or operators of the elements that do not 543 implement the mechanism. 545 REQ 21: The overload mechanism should ensure that the system remains 546 stable. When the offered load drops from above the overall 547 capacity of the network to below the overall capacity, the 548 throughput should stabilize and become equal to the offered load. 550 REQ 22: It must be possible to disable the reporting of load 551 information towards upstream targets based on the identity of 552 those targets. This allows a domain administrator who considers 553 the load of their elements to be sensitive information, to 554 restrict access to that information. Of course, in such cases, 555 there is no expectation that the overload mechanism itself will 556 help prevent overload from that upstream target. 558 REQ 23: It must be possible for the overload mechanism to work in 559 cases where there is a load balancer in front of a farm of 560 proxies. 562 7. 
Security Considerations

564 Like all protocol mechanisms, a solution for overload handling must
565 protect against malicious attacks from both inside and outside the
566 network. This document includes requirements for such security
567 functions.

568 8. IANA Considerations

570 None.

572 9. Acknowledgements

574 The author would like to thank Steve Mayer, Mouli Chandramouli,
575 Robert Whent, Mark Perkins, Joe Stone, Vijay Gurbani, Steve Norreys,
576 Volker Hilt, Spencer Dawkins, Matt Mathis, Juergen Schoenwaelder, and
577 Dale Worley for their contributions to this document.

579 10. Informative References

581 [RFC3261] Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
582 A., Peterson, J., Sparks, R., Handley, M., and E.
583 Schooler, "SIP: Session Initiation Protocol", RFC 3261,
584 June 2002.

586 [RFC3398] Camarillo, G., Roach, A., Peterson, J., and L. Ong,
587 "Integrated Services Digital Network (ISDN) User Part
588 (ISUP) to Session Initiation Protocol (SIP) Mapping",
589 RFC 3398, December 2002.

591 [RFC4412] Schulzrinne, H. and J. Polk, "Communications Resource
592 Priority for the Session Initiation Protocol (SIP)",
593 RFC 4412, February 2006.

595 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
596 Requirement Levels", BCP 14, RFC 2119, March 1997.

598 [I-D.ietf-sip-outbound]
599 Jennings, C. and R. Mahy, "Managing Client Initiated
600 Connections in the Session Initiation Protocol (SIP)",
601 draft-ietf-sip-outbound-15 (work in progress), June 2008.

603 Author's Address

605 Jonathan Rosenberg
606 Cisco
607 Edison, NJ
608 US

610 Email: jdrosen@cisco.com
611 URI: http://www.jdrosen.net

613 Full Copyright Statement

615 Copyright (C) The IETF Trust (2008).

617 This document is subject to the rights, licenses and restrictions
618 contained in BCP 78, and except as set forth therein, the authors
619 retain all their rights.
621 This document and the information contained herein are provided on an 622 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 623 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 624 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 625 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 626 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 627 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 629 Intellectual Property 631 The IETF takes no position regarding the validity or scope of any 632 Intellectual Property Rights or other rights that might be claimed to 633 pertain to the implementation or use of the technology described in 634 this document or the extent to which any license under such rights 635 might or might not be available; nor does it represent that it has 636 made any independent effort to identify any such rights. Information 637 on the procedures with respect to rights in RFC documents can be 638 found in BCP 78 and BCP 79. 640 Copies of IPR disclosures made to the IETF Secretariat and any 641 assurances of licenses to be made available, or the result of an 642 attempt made to obtain a general license or permission for the use of 643 such proprietary rights by implementers or users of this 644 specification can be obtained from the IETF on-line IPR repository at 645 http://www.ietf.org/ipr. 647 The IETF invites any interested party to bring to its attention any 648 copyrights, patents or patent applications, or other proprietary 649 rights that may cover technology that may be required to implement 650 this standard. Please address the information to the IETF at 651 ietf-ipr@ietf.org. 653 Acknowledgment 655 Funding for the RFC Editor function is provided by the IETF 656 Administrative Support Activity (IASA).