2 Network Working Group E. McMurry 3 Internet-Draft B. Campbell 4 Intended status: Informational Tekelec 5 Expires: December 8, 2013 June 6, 2013 7 Diameter Overload Control Requirements 8 draft-ietf-dime-overload-reqs-07 10 Abstract 12 When a Diameter server or agent becomes overloaded, it needs to be 13 able to gracefully reduce its load, typically by informing clients to 14 reduce sending traffic for some period of time. Otherwise, it must 15 continue to expend resources parsing and responding to Diameter 16 messages, possibly resulting in congestion collapse. The existing 17 Diameter mechanisms, listed in Section 3, are not sufficient for this 18 purpose. This document describes the limitations of the existing 19 mechanisms in Section 4. Requirements for new overload management 20 mechanisms are provided in Section 7. 22 Status of this Memo 24 This Internet-Draft is submitted in full conformance with the 25 provisions of BCP 78 and BCP 79. 27 Internet-Drafts are working documents of the Internet Engineering 28 Task Force (IETF). Note that other groups may also distribute 29 working documents as Internet-Drafts. The list of current Internet- 30 Drafts is at http://datatracker.ietf.org/drafts/current/. 32 Internet-Drafts are draft documents valid for a maximum of six months 33 and may be updated, replaced, or obsoleted by other documents at any 34 time. It is inappropriate to use Internet-Drafts as reference 35 material or to cite them other than as "work in progress." 37 This Internet-Draft will expire on December 8, 2013. 39 Copyright Notice 41 Copyright (c) 2013 IETF Trust and the persons identified as the 42 document authors. All rights reserved. 44 This document is subject to BCP 78 and the IETF Trust's Legal 45 Provisions Relating to IETF Documents 46 (http://trustee.ietf.org/license-info) in effect on the date of 47 publication of this document. Please review these documents 48 carefully, as they describe your rights and restrictions with respect 49 to this document. Code Components extracted from this document must 50 include Simplified BSD License text as described in Section 4.e of 51 the Trust Legal Provisions and are provided without warranty as 52 described in the Simplified BSD License. 54 Table of Contents 56 1.
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3 57 1.1. Causes of Overload . . . . . . . . . . . . . . . . . . . . 3 58 1.2. Effects of Overload . . . . . . . . . . . . . . . . . . . 5 59 1.3. Overload vs. Network Congestion . . . . . . . . . . . . . 5 60 1.4. Diameter Applications in a Broader Network . . . . . . . . 5 61 1.5. Documentation Conventions . . . . . . . . . . . . . . . . 6 62 2. Overload Scenarios . . . . . . . . . . . . . . . . . . . . . . 6 63 2.1. Peer to Peer Scenarios . . . . . . . . . . . . . . . . . . 7 64 2.2. Agent Scenarios . . . . . . . . . . . . . . . . . . . . . 9 65 2.3. Interconnect Scenario . . . . . . . . . . . . . . . . . . 12 66 3. Existing Mechanisms . . . . . . . . . . . . . . . . . . . . . 13 67 4. Issues with the Current Mechanisms . . . . . . . . . . . . . . 14 68 4.1. Problems with Implicit Mechanism . . . . . . . . . . . . . 15 69 4.2. Problems with Explicit Mechanisms . . . . . . . . . . . . 15 70 5. Diameter Overload Case Studies . . . . . . . . . . . . . . . . 16 71 5.1. Overload in Mobile Data Networks . . . . . . . . . . . . . 16 72 5.2. 3GPP Study on Core Network Overload . . . . . . . . . . . 17 73 6. Extensibility and Application Independence . . . . . . . . . . 18 74 7. Solution Requirements . . . . . . . . . . . . . . . . . . . . 18 75 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 23 76 9. Security Considerations . . . . . . . . . . . . . . . . . . . 23 77 9.1. Access Control . . . . . . . . . . . . . . . . . . . . . . 24 78 9.2. Denial-of-Service Attacks . . . . . . . . . . . . . . . . 24 79 9.3. Replay Attacks . . . . . . . . . . . . . . . . . . . . . . 24 80 9.4. Man-in-the-Middle Attacks . . . . . . . . . . . . . . . . 25 81 9.5. Compromised Hosts . . . . . . . . . . . . . . . . . . . . 25 82 10. References . . . . . . . . . . . . . . . . . . . . . . . . . . 25 83 10.1. Normative References . . . . . . . . . . . . . . . . . . . 25 84 10.2. Informative References . . . . . . . . . . . . . . . . . . 26 85 Appendix A. Contributors . . . . . . . . . . . . . . . . . . . . 26 86 Appendix B. Acknowledgements . . . . . . . . . . . . . . . . . . 27 87 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 27 89 1. Introduction 91 When a Diameter [RFC6733] server or agent becomes overloaded, it 92 needs to be able to gracefully reduce its load, typically by 93 informing clients to reduce sending traffic for some period of time. 94 Otherwise, it must continue to expend resources parsing and 95 responding to Diameter messages, possibly resulting in congestion 96 collapse. The existing mechanisms provided by Diameter are not 97 sufficient for this purpose. This document describes the limitations 98 of the existing mechanisms, and provides requirements for new 99 overload management mechanisms. 101 This document draws on the work done on SIP overload control 102 ([RFC5390], [RFC6357]) as well as on experience gained via overload 103 handling in Signaling System No. 7 (SS7) networks and studies done by 104 the Third Generation Partnership Project (3GPP) (Section 5). 106 Diameter is not typically an end-user protocol; rather it is 107 generally used as one component in support of some end-user activity. 109 For example, a SIP server might use Diameter to authenticate and 110 authorize user access. Overload in the Diameter backend 111 infrastructure will likely impact the experience observed by the end 112 user in the SIP application. 
114 The impact of Diameter overload on the client application (a client 115 application may use the Diameter protocol and other protocols to do 116 its job) is beyond the scope of this document. 118 This document presents non-normative descriptions of causes of 119 overload along with related scenarios and studies. Finally, it 120 offers a set of normative requirements for an improved overload 121 indication mechanism. 123 1.1. Causes of Overload 125 Overload occurs when an element, such as a Diameter server or agent, 126 has insufficient resources to successfully process all of the traffic 127 it is receiving. Resources include all of the capabilities of the 128 element used to process a request, including CPU processing, memory, 129 I/O, and disk resources. They can also include external resources such 130 as a database or DNS server, in which case the CPU processing, 131 memory, I/O, and disk resources of those elements are effectively 132 part of the logical element processing the request. 134 Overload can occur for many reasons, including: 136 Inadequate capacity: When designing Diameter networks, that is, 137 application layer multi-node Diameter deployments, it can be very 138 difficult to predict all scenarios that may cause elevated 139 traffic. It may also be more costly to implement support for some 140 scenarios than a network operator may deem worthwhile. This 141 results in the likelihood that a Diameter network will not have 142 adequate capacity to handle all situations. 144 Dependency failures: A Diameter node can become overloaded because a 145 resource on which it is dependent has failed or become overloaded, 146 greatly reducing the logical capacity of the node. In these 147 cases, even minimal traffic might cause the node to go into 148 overload. Examples of such dependency overloads include DNS 149 servers, databases, disks, and network interfaces. 151 Component failures: A Diameter node can become overloaded when it is 152 a member of a cluster of servers that each share the load of 153 traffic, and one or more of the other members in the cluster fail. 154 In this case, the remaining nodes take over the work of the failed 155 nodes. Normally, capacity planning takes such failures into 156 account, and servers are typically run with enough spare capacity 157 to handle failure of another node. However, unusual failure 158 conditions can cause many nodes to fail at once. This is often 159 the case with software failures, where a bad packet or bad 160 database entry hits the same bug in a set of nodes in a cluster. 162 Network Initiated Traffic Flood: Issues with the radio access 163 network in a mobile network (such as radio overlays with frequent 164 handovers) and operational changes are examples of network events 165 that can precipitate a flood of Diameter signaling traffic, such 166 as an avalanche restart. Failure of a Diameter proxy may also 167 result in a large amount of signaling as connections and sessions 168 are reestablished. 170 Subscriber Initiated Traffic Flood: Large gatherings of subscribers 171 or events that result in many subscribers interacting with the 172 network in close time proximity can result in Diameter signaling 173 traffic floods. For example, the finale of a large fireworks show 174 could be immediately followed by many subscribers posting 175 messages, pictures, and videos concentrated on one portion of a 176 network.
Subscriber devices, such as smartphones, may use 177 aggressive registration strategies that generate unusually high 178 Diameter traffic loads. 180 DoS attacks: An attacker, wishing to disrupt service in the network, 181 can cause a large amount of traffic to be launched at a target 182 element. This can be done from a central source of traffic or 183 through a distributed DoS attack. In all cases, the volume of 184 traffic well exceeds the capacity of the element, sending the 185 system into overload. 187 1.2. Effects of Overload 189 Modern Diameter networks, composed of application layer multi-node 190 deployments of Diameter elements, may operate at very large 191 transaction volumes. If a Diameter node becomes overloaded, or even 192 worse, fails completely, a large number of messages may be lost very 193 quickly. Even with redundant servers, many messages can be lost in 194 the time it takes for failover to complete. While a Diameter client 195 or agent should be able to retry such requests, an overloaded peer 196 may cause a sudden large increase in the number of 197 transactions needing to be retried, rapidly filling local queues or 198 otherwise contributing to local overload. Therefore, Diameter devices 199 need to be able to shed load before critical failures can occur. 201 1.3. Overload vs. Network Congestion 203 This document uses the term "overload" to refer to application-layer 204 overload at Diameter nodes. This is distinct from "network 205 congestion", that is, congestion that occurs at the lower networking 206 layers that may impact the delivery of Diameter messages between 207 nodes. The authors recognize that element overload and network 208 congestion are interrelated, and that overload can contribute to 209 network congestion and vice versa. 211 Network congestion issues are better handled by the transport 212 protocols. Diameter uses TCP and SCTP, both of which include 213 congestion management features. Analysis of whether those features 214 are sufficient for transport-level congestion between Diameter nodes, 215 and any work to further mitigate network congestion, are out of scope 216 both for this document and for the work proposed by this document. 218 1.4. Diameter Applications in a Broader Network 220 Most elements using Diameter applications do not use Diameter 221 exclusively. It is important to realize that overload of an element 222 can be caused by a number of factors that may be unrelated to the 223 processing of Diameter or Diameter applications. 225 An element communicating via protocols other than Diameter that is 226 also using a Diameter application needs to be able to signal to 227 Diameter peers that it is experiencing overload regardless of the 228 cause of the overload, since the overload will affect that element's 229 ability to process Diameter transactions. The element may also need 230 to signal this on other protocols depending on its function and the 231 architecture of the network and application it is providing services 232 for. Whether that is necessary can only be decided within the 233 context of that architecture and application. A mechanism for 234 signaling overload with Diameter, which this specification details 235 the requirements for, provides applications the ability to signal 236 overload to their Diameter peers, mitigating that part of the issue. 237 Applications may need to use this, as well as other mechanisms, to 238 solve their broader overload issues.
Indicating overload on 239 protocols other than Diameter is out of scope for this document, and 240 for the work proposed by this document. 242 1.5. Documentation Conventions 244 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 245 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 246 document are to be interpreted as described in [RFC2119]. 248 The terms "client", "server", "agent", "node", "peer", "upstream", 249 and "downstream" are used as defined in [RFC6733]. 251 2. Overload Scenarios 253 Several Diameter deployment scenarios exist that may impact overload 254 management. The following scenarios help motivate the requirements 255 for an overload management mechanism. 257 These scenarios are by no means exhaustive, and are in general 258 simplified for the sake of clarity. In particular, the authors 259 assume that the client sends Diameter 260 requests to the server, and the server sends responses to the client, 261 even though Diameter supports bidirectional applications. Each 262 direction in such an application can be modeled separately. 264 In a large-scale deployment, many of the nodes represented in these 265 scenarios would be deployed as clusters of servers. The authors 266 assume that such a cluster is responsible for managing its own 267 internal load balancing and overload management so that it appears as 268 a single Diameter node. That is, other Diameter nodes can treat it 269 as a single, monolithic node for the purposes of overload management. 271 These scenarios do not illustrate the client application. As 272 mentioned in Section 1, Diameter is not typically an end-user 273 protocol; rather it is generally used in support of some other client 274 application. These scenarios do not consider the impact of Diameter 275 overload on the client application. 277 2.1. Peer to Peer Scenarios 279 This section describes Diameter peer-to-peer scenarios. That is, 280 scenarios where a Diameter client talks directly with a Diameter 281 server, without the use of a Diameter agent. 283 Figure 1 illustrates the simplest possible Diameter relationship. 284 The client and server share a one-to-one peer-to-peer relationship. 285 If the server becomes overloaded, either because the client exceeds 286 the server's capacity, or because the server's capacity is reduced 287 due to some resource dependency, the client needs to reduce the 288 amount of Diameter traffic it sends to the server. Since the client 289 cannot forward requests to another server, it must either queue 290 requests until the server recovers, or itself become overloaded in 291 the context of the client application and other protocols it may also 292 use. 294 +------------------+ 295 | | 296 | | 297 | Server | 298 | | 299 +--------+---------+ 300 | 301 | 302 +--------+---------+ 303 | | 304 | | 305 | Client | 306 | | 307 +------------------+ 309 Figure 1: Basic Peer to Peer Scenario 311 Figure 2 shows a similar scenario, except in this case the client has 312 multiple servers that can handle work for a specific realm and 313 application. If server 1 becomes overloaded, the client can forward 314 traffic to server 2. Assuming server 2 has sufficient reserve 315 capacity to handle the forwarded traffic, the client should be able 316 to continue serving client application protocol users.
If server 1 317 is approaching overload, but can still handle some number of new 318 requests, it needs to be able to instruct the client to forward a 319 subset of its traffic to server 2. 321 +------------------+ +------------------+ 322 | | | | 323 | | | | 324 | Server 1 | | Server 2 | 325 | | | | 326 +--------+-`.------+ +------.'+---------+ 327 `. .' 328 `. .' 329 `. .' 330 `. .' 331 +-------`.'--------+ 332 | | 333 | | 334 | Client | 335 | | 336 +------------------+ 338 Figure 2: Multiple Server Peer to Peer Scenario 340 Figure 3 illustrates a peer-to-peer scenario with multiple Diameter 341 realm and application combinations. In this example, server 2 can 342 handle work for both applications. Each application might have 343 different resource dependencies. For example, a server might need to 344 access one database for application A, and another for application B. 345 This creates a possibility that server 2 could become overloaded for 346 application A but not for application B, in which case the client 347 would need to divert some part of its application A requests to 348 server 1, but should not divert any application B requests. This 349 requires server 2 to be able to distinguish between applications when 350 it indicates an overload condition to the client. 352 On the other hand, it is possible that the servers host many 353 applications. If server 2 becomes overloaded for all applications, 354 it would be undesirable for it to have to notify the client 355 separately for each application. Therefore, it also needs a way to 356 indicate that it is overloaded for all possible applications. 358 +---------------------------------------------+ 359 | Application A +----------------------+----------------------+ 360 |+------------------+ | +----------------+ | +------------------+| 361 || | | | | | | || 362 || | | | | | | || 363 || Server 1 | | | Server 2 | | | Server 3 || 364 || | | | | | | || 365 |+--------+---------+ | +-------+--------+ | +-+----------------+| 366 | | | | | | | 367 +---------+-----------+----------+-----------+ | | 368 | | | | | 369 | | | | Application B | 370 | +----------+----------------+-----------------+ 371 ``-.._ | | 372 `-..__ | _.-'' 373 `--._ | _.-'' 374 ``-._ | _.-'' 375 +-----`-.-''-----+ 376 | | 377 | | 378 | Client | 379 | | 380 +----------------+ 382 Figure 3: Multiple Application Peer to Peer Scenario 384 2.2. Agent Scenarios 386 This section describes scenarios that include a Diameter agent, 387 either in the form of a Diameter relay or Diameter proxy. These 388 scenarios do not consider Diameter redirect agents, since they are 389 more readily modeled as end-servers. 391 Figure 4 illustrates a simple Diameter agent scenario with a single 392 client, agent, and server. In this case, overload can occur at the 393 server, at the agent, or both. But in most cases, client behavior is 394 the same whether overload occurs at the server or at the agent. From 395 the client's perspective, server overload and agent overload are the 396 same thing. 398 +------------------+ 399 | | 400 | | 401 | Server | 402 | | 403 +--------+---------+ 404 | 405 | 406 +--------+---------+ 407 | | 408 | | 409 | Agent | 410 | | 411 +--------+---------+ 412 | 413 | 414 +--------+---------+ 415 | | 416 | | 417 | Client | 418 | | 419 +------------------+ 421 Figure 4: Basic Agent Scenario 423 Figure 5 shows an agent scenario with multiple servers.
If server 1 424 becomes overloaded, but server 2 has sufficient reserve capacity, the 425 agent may be able to transparently divert some or all Diameter 426 requests originally bound for server 1 to server 2. 428 In most cases, the client does not have detailed knowledge of the 429 Diameter topology upstream of the agent. If the agent uses dynamic 430 discovery to find eligible servers, the set of eligible servers may 431 not be enumerable from the perspective of the client. Therefore, in 432 most cases the agent needs to deal with any upstream overload issues 433 in a way that is transparent to the client. If one server notifies 434 the agent that it has become overloaded, the notification should not 435 be passed back to the client in a way that the client could 436 mistakenly perceive the agent itself as being overloaded. If the set 437 of all possible destinations upstream of the agent no longer has 438 sufficient capacity for incoming load, the agent itself becomes 439 effectively overloaded. 441 On the other hand, there are cases where the client needs to be able 442 to select a particular server from behind an agent. For example, if 443 a Diameter request is part of a multiple-round-trip authentication, 444 or is otherwise part of a Diameter "session", it may have a 445 DestinationHost AVP that requires the request to be served by server 446 1. Therefore the agent may need to inform a client that a particular 447 upstream server is overloaded or otherwise unavailable. Note that 448 there can be many ways a server can be specified, which may have 449 different implications (e.g. by IP address, by host name, etc). 451 +------------------+ +------------------+ 452 | | | | 453 | | | | 454 | Server 1 | | Server 2 | 455 | | | | 456 +--------+-`.------+ +------.'+---------+ 457 `. .' 458 `. .' 459 `. .' 460 `. .' 461 +-------`.'--------+ 462 | | 463 | | 464 | Agent | 465 | | 466 +--------+---------+ 467 | 468 | 469 | 470 +--------+---------+ 471 | | 472 | | 473 | Client | 474 | | 475 +------------------+ 477 Figure 5: Multiple Server Agent Scenario 479 Figure 6 shows a scenario where an agent routes requests to a set of 480 servers for more than one Diameter realm and application. In this 481 scenario, if server 1 becomes overloaded or unavailable, the agent 482 may effectively operate at reduced capacity for application A, but at 483 full capacity for application B. Therefore, the agent needs to be 484 able to report that it is overloaded for one application, but not for 485 another. 487 +--------------------------------------------+ 488 | Application A +----------------------+----------------------+ 489 |+------------------+ | +----------------+ | +------------------+| 490 || | | | | | | || 491 || | | | | | | || 492 || Server 1 | | | Server 2 | | | Server 3 || 493 || | | | | | | || 494 |+---------+--------+ | +-------+--------+ | +--+---------------+| 495 | | | | | | | 496 +----------+----------+----------+-----------+ | | 497 | | | | | 498 | | | | Application B | 499 | +----------+-----------------+----------------+ 500 | | | 501 ``--.__ | _. 502 ``-.__ | __.--'' 503 `--.._ | _..--' 504 +----``-+.''-----+ 505 | | 506 | | 507 | Agent | 508 | | 509 +-------+--------+ 510 | 511 | 512 +-------+--------+ 513 | | 514 | | 515 | Client | 516 | | 517 +----------------+ 519 Figure 6: Multiple Application Agent Scenario 521 2.3. 
Interconnect Scenario 523 Another scenario to consider when looking at Diameter overload is 524 that of multiple network operators using Diameter components 525 connected through an interconnect service, e.g., using IPX. IPX (IP 526 eXchange) [IR.34] is an Inter-Operator IP Backbone that provides a 527 roaming interconnection network between mobile operators and service 528 providers. The IPX is also used to transport Diameter signaling 529 between operators [IR.88]. Figure 7 shows two network operators with 530 an interconnect network in-between. There could be any number of 531 these networks between any two network operators' networks. 533 +-------------------------------------------+ 534 | Interconnect | 535 | | 536 | +--------------+ +--------------+ | 537 | | Server 3 |------| Server 4 | | 538 | +--------------+ +--------------+ | 539 | .' `. | 540 +------.-'--------------------------`.------+ 541 .' `. 542 .-' `. 543 ------------.'-----+ +----`.------------- 544 +----------+ | | +----------+ 545 | Server 1 | | | | Server 2 | 546 +----------+ | | +----------+ 547 | | 548 Network Operator 1 | | Network Operator 2 549 -------------------+ +------------------- 551 Figure 7: Two Network Interconnect Scenario 553 The characteristics of the information that an operator would want to 554 share over such a connection are different from those of the information 555 shared between components within a network operator's network. 556 Network operators may not want to convey topology or operational 557 information, which limits how much overload and loading information 558 can be sent. For the interconnect scenario shown, Server 2 may want 559 to signal overload to Server 1, to affect traffic coming from Network 560 Operator 1. 562 This case is distinct from those internal to a network operator's 563 network, where there may be many more elements in a more complicated 564 topology. Also, the elements in the interconnect network may not 565 support Diameter overload control, and the network operators may not 566 want the interconnect network to use overload or loading information. 567 They may only want the information to pass through the interconnect 568 network without further processing or action by that network, even 569 if the elements in the interconnect network do support 570 Diameter overload control. 572 3. Existing Mechanisms 574 Diameter offers both implicit and explicit mechanisms for a Diameter 575 node to learn that a peer is overloaded or unreachable. The implicit 576 mechanism is simply the lack of responses to requests. If a client 577 fails to receive a response in a certain time period, it assumes the 578 upstream peer is unavailable, or overloaded to the point of effective 579 unavailability. The watchdog mechanism [RFC3539] ensures that a 580 certain rate of transaction responses occurs even when there is 581 otherwise little or no other Diameter traffic. 583 The explicit mechanism can involve specific protocol error responses, 584 where an agent or server tells a downstream peer that it is either 585 too busy to handle a request (DIAMETER_TOO_BUSY) or unable to route a 586 request to an upstream destination (DIAMETER_UNABLE_TO_DELIVER), 587 perhaps because that destination itself is overloaded to the point of 588 unavailability. 590 Another explicit mechanism, a DPR (Disconnect-Peer-Request) message, 591 can be sent with a Disconnect-Cause of BUSY. This signals the 592 sender's intent to close the transport connection, and requests the 593 client not to reconnect.
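As a non-normative illustration only, the following sketch (in Python; the class name, handler names, and the hold-down interval are invented for this example and are not defined by [RFC6733] or [RFC3539]) shows the kind of per-peer state a client might keep based solely on these existing signals, avoiding a peer that has reported one of them for a locally chosen period:

   import time

   # Result-Code and Disconnect-Cause values taken from RFC 6733.
   DIAMETER_TOO_BUSY = 3004
   DIAMETER_UNABLE_TO_DELIVER = 3002
   DISCONNECT_CAUSE_BUSY = 1

   class PeerState:
       """Tracks peers treated as unavailable due to explicit signals.

       The hold-down interval is a local guess; the base protocol
       attaches no durational validity to these indications.
       """

       def __init__(self, peers, hold_down_secs=30.0):
           self.peers = list(peers)        # candidate peers, in preference order
           self.unavailable_until = {}     # peer identity -> earliest retry time
           self.hold_down_secs = hold_down_secs

       def select_peer(self):
           """Return the first peer not currently held down, else None."""
           now = time.monotonic()
           for peer in self.peers:
               if self.unavailable_until.get(peer, 0.0) <= now:
                   return peer
           return None

       def on_answer(self, peer, result_code):
           """Note overload-related protocol errors seen in an answer."""
           if result_code in (DIAMETER_TOO_BUSY, DIAMETER_UNABLE_TO_DELIVER):
               # Per the RFC 3539 recommendation, fail over remaining
               # requests; also avoid this peer for a local hold-down period.
               self.unavailable_until[peer] = time.monotonic() + self.hold_down_secs

       def on_dpr(self, peer, disconnect_cause):
           """Handle a Disconnect-Peer-Request from the peer."""
           if disconnect_cause == DISCONNECT_CAUSE_BUSY:
               self.unavailable_until[peer] = time.monotonic() + self.hold_down_secs

Note that such an implementation has to invent the hold-down interval and can only treat the whole peer as unavailable; none of the existing signals conveys scope or durational validity, which is the core limitation discussed in Section 4.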
595 Once a Diameter node learns that an upstream peer has become 596 overloaded via one of these mechanisms, it can then attempt to take 597 action to reduce the load. This usually means forwarding traffic to 598 an alternate destination, if available. If no alternate destination 599 is available, the node must either reduce the number of messages it 600 originates (in the case of a client) or inform the client to reduce 601 traffic (in the case of an agent). 603 Diameter requires the use of a congestion-managed transport layer, 604 currently TCP or SCTP, to mitigate network congestion. It is 605 expected that these transports manage network congestion and that 606 issues with transport (e.g., congestion propagation and window 607 management) are managed at that level. But even with a congestion- 608 managed transport, a Diameter node can become overloaded at the 609 Diameter protocol or application layers due to the causes described 610 in Section 1.1, and congestion-managed transports do not provide 611 facilities (and are at the wrong level) to handle server overload. 612 Transport-level congestion management is also not sufficient to 613 address overload in cases of multi-hop and multi-destination 614 signaling. 616 4. Issues with the Current Mechanisms 618 The currently available Diameter mechanisms for indicating an 619 overload condition are not adequate to avoid service outages due to 620 overload. This inadequacy may, in turn, contribute to broader 621 congestion collapse due to unresponsive Diameter nodes causing 622 application or transport layer retransmissions. In particular, they 623 do not allow a Diameter agent or server to shed load as it approaches 624 overload. At best, a node can only indicate that it needs to 625 entirely stop receiving requests, i.e., that it has effectively 626 failed. Even that is problematic due to the inability to indicate 627 durational validity on the transient errors available in the base 628 Diameter protocol. Diameter offers no mechanism to allow a node to 629 indicate different overload states for different categories of 630 messages, for example, if it is overloaded for one Diameter 631 application but not another. 633 4.1. Problems with Implicit Mechanism 635 The implicit mechanism does not allow an agent or server to inform the 636 client of a problem until it is effectively too late to do anything 637 about it. The client does not know to take action until the upstream 638 node has effectively failed. A Diameter node has no opportunity to 639 shed load early to avoid collapse in the first place. 641 Additionally, the implicit mechanism cannot distinguish between 642 overload of a Diameter node and network congestion. Diameter treats 643 the failure to receive an answer as a transport failure. 645 4.2. Problems with Explicit Mechanisms 647 The Diameter specification is ambiguous on how a client should handle 648 receipt of a DIAMETER_TOO_BUSY response. The base specification 649 [RFC6733] indicates that the sending client should attempt to send 650 the request to a different peer. It makes no suggestion that the 651 receipt of a DIAMETER_TOO_BUSY response should affect future Diameter 652 messages in any way. 654 The Authentication, Authorization, and Accounting (AAA) Transport 655 Profile [RFC3539] recommends that a AAA node that receives a "Busy" 656 response fail over all remaining requests to a different agent or 657 server.
But while the Diameter base specification explicitly depends 658 on RFC 3539 to define transport behavior, it does not refer to RFC 3539 659 in the description of behavior on receipt of DIAMETER_TOO_BUSY. 660 There is a strong likelihood that at least some implementations will 661 continue to send Diameter requests to an upstream peer even after 662 receiving a DIAMETER_TOO_BUSY error. 664 BCP 41 [RFC2914] describes, among other things, how end-to-end 665 application behavior can help avoid congestion collapse. In 666 particular, an application should avoid sending messages that will 667 never be delivered or processed. The DIAMETER_TOO_BUSY behavior as 668 described in the Diameter base specification fails at this, since if 669 an upstream node becomes overloaded, a client attempts each request, 670 and does not discover the need to fail over the request until the 671 initial attempt fails. 673 The situation is improved if implementations follow the [RFC3539] 674 recommendation and keep state about upstream peer overload. But even 675 then, the Diameter specification offers no guidance on how long a 676 client should wait before retrying the overloaded destination. If an 677 agent or server supports multiple realms and/or applications, 678 DIAMETER_TOO_BUSY offers no way to indicate that it is overloaded for 679 one application but not another. A DIAMETER_TOO_BUSY error can only 680 indicate overload at a "whole server" scope. 682 Agent processing of a DIAMETER_TOO_BUSY response is also problematic 683 as described in the base specification. DIAMETER_TOO_BUSY is defined 684 as a protocol error. If an agent receives a protocol error, it may 685 either handle it locally or it may forward the response back towards 686 the downstream peer. If a downstream peer receives the 687 DIAMETER_TOO_BUSY response, it may stop sending all requests to the 688 agent for some period of time, even though the agent may still be 689 able to deliver requests to other upstream peers. 691 DIAMETER_UNABLE_TO_DELIVER and DPR with cause code BUSY also 692 have no mechanisms for specifying the scope or cause of the failure, 693 or the durational validity. 695 The issues with error responses in [RFC6733] extend beyond the 696 particular issues for overload control and have been addressed in an 697 ad hoc fashion by various implementations. Addressing these in a 698 standard way would be a useful exercise, but it is beyond the scope 699 of this document. 701 5. Diameter Overload Case Studies 703 5.1. Overload in Mobile Data Networks 705 As the number of Third Generation (3G) and Long Term Evolution (LTE) 706 enabled smartphone devices continues to expand in mobile networks, 707 there have been situations where high signaling traffic load led to 708 overload events at the Diameter-based Home Location Registers (HLR) 709 and/or Home Subscriber Servers (HSS) [TR23.843]. The root causes of 710 the HLR congestion events were manifold but included hardware failure 711 and procedural errors. The result was high signaling traffic load on 712 the HLR and HSS. 714 The 3GPP architecture [TS23.002] makes extensive use of Diameter. It 715 is used for mobility management [TS29.272] (and others), the IP 716 Multimedia Subsystem (IMS) [TS29.228] (and others), policy and 717 charging control [TS29.212] (and others), as well as other functions.
718 The details of the architecture are out of scope for this document, 719 but it is worth noting that there are quite a few Diameter 720 applications, some with quite large amounts of Diameter signaling in 721 deployed networks. 723 The 3GPP specifications do not currently address overload for 724 Diameter applications or provide load control mechanisms equivalent 725 to those provided in the more traditional SS7 elements in the Global 726 System for Mobile Communications (GSM) [TS29.002]. The capabilities 727 specified in the 3GPP standards do not adequately address the 728 abnormal conditions where excessively high signaling traffic loads 729 are experienced. 731 Smartphones, an increasingly large percentage of mobile devices, 732 contribute much more heavily, relative to non-smartphones, to the 733 continuation of a registration surge due to their very aggressive 734 registration algorithms. Smartphone behavior contributes to network 735 loading and can contribute to overload conditions. The aggressive 736 smartphone logic is designed to: 738 a. always have voice and data registration, and 740 b. constantly try to be on 3G or LTE data (and thus on 3G voice or 741 VoLTE [IR.92]) for their added benefits. 743 Non-smartphones typically have logic to wait for a time period after 744 registering successfully on voice and data. 746 The aggressive smartphone registration is problematic in two ways: 748 o first, by generating excessive signaling load towards the HSS that 749 is ten times that from a non-smartphone, 751 o and second, by causing continual registration attempts when a 752 network failure affects registrations through the 3G data network. 754 5.2. 3GPP Study on Core Network Overload 756 A study in 3GPP SA2 on core network overload has produced the 757 technical report [TR23.843]. This enumerates several causes of 758 overload in mobile core networks, including portions that are signaled 759 using Diameter. That report is a work in progress and is not 760 complete. However, it is useful for pointing out scenarios and the 761 general need for an overload control mechanism for Diameter. 763 It is common for mobile networks to employ more than one radio 764 technology and to do so in an overlay fashion with multiple 765 technologies present in the same location (such as 2nd or 3rd 766 generation mobile technologies along with LTE). This presents 767 opportunities for traffic storms when issues occur on one overlay and 768 not another, as all devices that had been on the overlay with issues 769 switch to the other. This causes a large amount of Diameter traffic as locations 770 and policies are updated. 772 Another scenario called out by this study is a flood of registration 773 and mobility management events caused by some element in the core 774 network failing. This flood of traffic from end nodes falls under 775 the network initiated traffic flood category. There is likely to 776 also be traffic resulting directly from the component failure in this 777 case. A similar flood can occur when elements or components recover 778 as well. 780 Subscriber initiated traffic floods are also indicated in this study 781 as an overload mechanism, where a large number of mobile devices 782 attempt to access services at the same time, such as in response 783 to an entertainment event or a catastrophic event. 785 While this 3GPP study is concerned with the broader effects of these 786 scenarios on wireless networks and their elements, the scenarios have 787 implications specifically for Diameter signaling.
One of the goals 788 of this document is to provide guidance for a core mechanism that can 789 be used to mitigate the scenarios called out by this study. 791 6. Extensibility and Application Independence 793 Given the variety of scenarios Diameter elements can be deployed in, 794 and the variety of roles they can fulfill with Diameter and other 795 technologies, a single algorithm for handling overload may not be 796 sufficient. This effort cannot anticipate all possible future 797 scenarios and roles. Extensibility, particularly of algorithms used 798 to deal with overload, will be important to cover these cases. 800 Similarly, the scopes that overload information may apply to may 801 include cases that have not yet been considered. Extensibility in 802 this area will also be important. 804 The basic mechanism is intended to be application-independent, that 805 is, a Diameter node can use it across any existing and future 806 Diameter applications and expect reasonable results. Certain 807 Diameter applications might, however, benefit from application- 808 specific behavior over and above the mechanism's defaults. For 809 example, an application specification might specify relative 810 priorities of messages or selection of a specific overload control 811 algorithm. 813 7. Solution Requirements 815 This section proposes requirements for an improved mechanism to 816 control Diameter overload, with the goals of addressing the issues 817 described in Section 4 and supporting the scenarios described in 818 Section 2. 820 REQ 1: The overload control mechanism MUST provide a communication 821 method for Diameter nodes to exchange load and overload 822 information. 824 REQ 2: The mechanism MUST allow Diameter nodes to support overload 825 control regardless of which Diameter applications they 826 support. Diameter clients must be able to use the received 827 load and overload information to support graceful behavior 828 during an overload condition. Graceful behavior under 829 overload conditions is best described by REQ 3. 831 REQ 3: The overload control mechanism MUST limit the impact of 832 overload on the overall useful throughput of a Diameter 833 server, even when the incoming load on the network is far in 834 excess of its capacity. The overall useful throughput under 835 load is the ultimate measure of the value of an overload 836 control mechanism. 838 REQ 4: Diameter allows requests to be sent from either side of a 839 connection, and either side of a connection may need to 840 provide its overload status. The mechanism MUST allow each 841 side of a connection to independently inform the other of 842 its overload status. 844 REQ 5: Diameter allows nodes to determine their peers via dynamic 845 discovery or manual configuration. The mechanism MUST work 846 consistently without regard to how peers are determined. 848 REQ 6: The mechanism designers SHOULD seek to minimize the amount 849 of new configuration required for the mechanism to work. For 850 example, it is better to allow peers to advertise or 851 negotiate support for the mechanism, rather than to require 852 this knowledge to be configured at each node. 854 REQ 7: The overload control mechanism and any associated default 855 algorithm(s) MUST ensure that the system remains stable. At 856 some point after an overload condition has ended, the 857 mechanism MUST enable capacity to stabilize and become equal 858 to what it would be in the absence of an overload condition.
859 Note that this also requires that the mechanism MUST allow 860 nodes to shed load without introducing non-converging 861 oscillations during or after an overload condition. 863 REQ 8: Supporting nodes MUST be able to distinguish current 864 overload information from stale information, and SHOULD make 865 decisions using the most currently available information. 867 REQ 9: The mechanism MUST function across fully loaded as well as 868 quiescent transport connections. This is partially derived 869 from the requirement for stability in REQ 7. 871 REQ 10: Consumers of overload information MUST be able to determine 872 when the overload condition improves or ends. 874 REQ 11: The overload control mechanism MUST be able to operate in 875 networks of different sizes. 877 REQ 12: When a single network node fails, goes into overload, or 878 suffers from reduced processing capacity, the mechanism MUST 879 make it possible to limit the impact of this on other nodes 880 in the network. This helps to prevent a small-scale failure 881 from becoming a widespread outage. 883 REQ 13: The mechanism MUST NOT introduce substantial additional work 884 for a node in an overloaded state. For example, a requirement 885 for an overloaded node to send overload information every 886 time it received a new request would introduce substantial 887 work. Existing messaging is likely to have the 888 characteristic of increasing as an overload condition 889 approaches, allowing for the possibility of increased 890 feedback from information piggybacked on it. 892 REQ 14: Some scenarios that result in overload involve a rapid 893 increase of traffic with little time between normal levels 894 and overload-inducing levels. The mechanism SHOULD provide 895 for rapid feedback when traffic levels increase. 897 REQ 15: The mechanism MUST NOT interfere with the congestion control 898 mechanisms of underlying transport protocols. For example, 899 a mechanism that opened additional TCP connections when the 900 network is congested would reduce the effectiveness of the 901 underlying congestion control mechanisms. 903 REQ 16: The overload control mechanism is likely to be deployed 904 incrementally. The mechanism MUST support a mixed 905 environment where some, but not all, nodes implement it. 907 REQ 17: In a mixed environment with nodes that support the overload 908 control mechanism and that do not, the mechanism MUST result 909 in at least as much useful throughput as would have resulted 910 if the mechanism were not present. It SHOULD result in less 911 severe congestion in this environment. 913 REQ 18: In a mixed environment of nodes that support the overload 914 control mechanism and that do not, the mechanism MUST NOT 915 preclude elements that support overload control from 916 treating elements that do not support overload control in an 917 equitable fashion relative to those that do. Users and 918 operators of nodes that do not support the mechanism MUST 919 NOT unfairly benefit from the mechanism. The mechanism 920 specification SHOULD provide guidance to implementors for 921 dealing with elements not supporting overload control. 923 REQ 19: It MUST be possible to use the mechanism between nodes in 924 different realms and in different administrative domains. 926 REQ 20: Any explicit overload indication MUST be clearly 927 distinguishable from other errors reported via Diameter.
929 REQ 21: In cases where a network node fails, is so overloaded that 930 it cannot process messages, or cannot communicate due to a 931 network failure, it may not be able to provide explicit 932 indications of the nature of the failure or its levels of 933 congestion. The mechanism MUST result in at least as much 934 useful throughput as would have resulted if the overload 935 control mechanism were not in place. 937 REQ 22: The mechanism MUST provide a way for a node to throttle the 938 amount of traffic it receives from a peer node. This 939 throttling SHOULD be graded so that it can be applied 940 gradually as offered load increases. Overload is not a 941 binary state; there may be degrees of overload. 943 REQ 23: The mechanism MUST provide sufficient information to enable 944 a load-balancing node to divert messages that are rejected 945 or otherwise throttled by an overloaded upstream node to 946 other upstream nodes that are the most likely to have 947 sufficient capacity to process them. 949 REQ 24: The mechanism MUST provide a way of indicating load 950 levels even when not in an overloaded condition, to assist 951 nodes making decisions to prevent overload conditions from 952 occurring. 954 REQ 25: The base specification for the overload control mechanism 955 SHOULD offer general guidance on which message types might 956 be desirable to send or process over others during times of 957 overload, based on application-specific considerations. For 958 example, it may be more beneficial to process messages for 959 existing sessions ahead of new sessions. Some networks may 960 have a requirement to give priority to requests associated 961 with emergency sessions. Any normative or otherwise 962 detailed definition of the relative priorities of message 963 types during an overload condition will be the 964 responsibility of the application specification. 966 REQ 26: The mechanism MUST NOT prevent a node from prioritizing 967 requests based on any local policy, so that certain requests 968 are given preferential treatment, given additional 969 retransmission, not throttled, or processed ahead of others. 971 REQ 27: The overload control mechanism MUST NOT provide new 972 vulnerabilities to malicious attack, or increase the 973 severity of any existing vulnerabilities. This includes 974 vulnerabilities to DoS and DDoS attacks as well as replay 975 and man-in-the-middle attacks. Note that the Diameter base 976 specification [RFC6733] lacks end-to-end security and this 977 must be considered. 979 REQ 28: The mechanism MUST NOT depend on being deployed in 980 environments where all Diameter nodes are completely 981 trusted. It SHOULD operate as effectively as possible in 982 environments where other nodes are malicious; this includes 983 preventing malicious nodes from obtaining more than a fair 984 share of service. Note that this does not imply any 985 responsibility on the mechanism to detect, or take 986 countermeasures against, malicious nodes. 988 REQ 29: It MUST be possible for a supporting node to make 989 authorization decisions about what information will be sent 990 to peer nodes based on the identity of those nodes. This 991 allows a domain administrator who considers the load of 992 their nodes to be sensitive information to restrict access 993 to that information. Of course, in such cases, there is no 994 expectation that the overload control mechanism itself will 995 help prevent overload from that peer node.
997 REQ 30: The mechanism MUST NOT interfere with any Diameter-compliant 998 method that a node may use to protect itself from overload 999 from non-supporting nodes, or from denial-of-service 1000 attacks. 1002 REQ 31: There are multiple situations where a Diameter node may be 1003 overloaded for some purposes but not others. For example, 1004 this can happen to an agent or server that supports multiple 1005 applications, or when a server depends on multiple external 1006 resources, some of which may become overloaded while others 1007 are fully available. The mechanism MUST allow Diameter 1008 nodes to indicate overload with sufficient granularity to 1009 allow clients to take action based on the overloaded 1010 resources without unreasonably forcing available capacity to 1011 go unused. The mechanism MUST support specification of 1012 overload information with granularities of at least 1013 "Diameter node", "realm", and "Diameter application", and 1014 MUST allow extensibility for others to be added in the 1015 future. 1017 REQ 32: The mechanism MUST provide a method for extending the 1018 information communicated and the algorithms used for 1019 overload control. 1021 REQ 33: The mechanism MUST provide a default algorithm that is 1022 mandatory to implement. 1024 REQ 34: The mechanism SHOULD provide a method for exchanging 1025 overload and load information between elements that are 1026 connected by intermediaries that do not support the 1027 mechanism. 1029 8. IANA Considerations 1031 This document makes no requests of IANA. 1033 9. Security Considerations 1035 A Diameter overload control mechanism is primarily concerned with the 1036 load- and overload-related behavior of nodes in a Diameter network, 1037 and the information used to affect that behavior. Load and overload 1038 information is shared between nodes and directly affects that behavior, 1039 and thus is potentially vulnerable to a number of methods of attack. 1041 Load and overload information may also be sensitive from both 1042 business and network protection viewpoints. Operators of Diameter 1043 equipment want to control the visibility of load and overload information 1044 to keep it from being used for competitive intelligence or for 1045 targeting attacks. It is also important that the Diameter overload 1046 control mechanism not introduce any way in which any other 1047 information carried by Diameter is sent inappropriately. 1049 Note that the Diameter base specification [RFC6733] lacks end-to-end 1050 security, making it difficult for non-adjacent nodes to verify the authenticity and 1051 ownership of load and overload information. 1052 Authentication of load and overload information helps to alleviate 1053 several of the security issues listed in this section. 1055 This document includes requirements intended to mitigate the effects 1056 of attacks and to protect the information used by the mechanism. 1058 9.1. Access Control 1060 To control the visibility of load and overload information, sending 1061 should be subject to some form of authentication and authorization of 1062 the receiver. It is also important to the receivers that they are 1063 confident the load and overload information they receive is from a 1064 legitimate source. Note that this implies a certain amount of 1065 configurability on the nodes supporting the Diameter overload control 1066 mechanism. 1068 9.2. Denial-of-Service Attacks 1070 An overload control mechanism provides a very attractive target for 1071 denial-of-service attacks.
A small number of messages may cause a 1072 large service disruption by falsely reporting overload conditions. 1073 Alternatively, attacking servers nearing, or in, overload may also be 1074 facilitated by disrupting their overload indications, potentially 1075 preventing them from mitigating their overload condition. 1077 A design goal for the Diameter overload control mechanism is to 1078 minimize or eliminate the possibility of using the mechanism for this 1079 type of attack. 1081 As the intent of some denial-of-service attacks is to induce overload 1082 conditions, an effective overload control mechanism should help to 1083 mitigate the effects of such an attack. 1085 9.3. Replay Attacks 1087 An attacker that has managed to obtain some messages from the 1088 overload control mechanism may attempt to affect the behavior of 1089 nodes supporting the mechanism by sending those messages at 1090 potentially inopportune times. In addition to time shifting, replay 1091 attacks may send messages to other nodes as well (target shifting). 1093 A design goal for the Diameter overload control mechanism is to 1094 minimize or eliminate the possibility of causing disruption by using 1095 a replay attack on the Diameter overload control mechanism. 1097 9.4. Man-in-the-Middle Attacks 1099 By inserting themselves in between two nodes supporting the Diameter 1100 overload control mechanism, an attacker may potentially both access 1101 and alter the information sent between those nodes. This can be used 1102 for information gathering for business intelligence and attack 1103 targeting, as well as direct attacks. 1105 A design goal for the Diameter overload control mechanism is to 1106 minimize or eliminate the possibility of causing disruption through man-in- 1107 the-middle attacks on the Diameter overload control mechanism. A 1108 transport using TLS and/or IPsec may be desirable for this. 1110 9.5. Compromised Hosts 1112 A compromised host that supports the Diameter overload control 1113 mechanism could be used for information gathering as well as for 1114 sending malicious information to any Diameter node that would 1115 normally accept information from it. While it is beyond the scope of 1116 the Diameter overload control mechanism to mitigate any operational 1117 interruption to the compromised host, a reasonable design goal is to 1118 minimize the impact that a compromised host can have on other nodes 1119 through the use of the Diameter overload control mechanism. Of 1120 course, a compromised host could be used to cause damage in a number 1121 of other ways. This is out of scope for a Diameter overload control 1122 mechanism. 1124 10. References 1126 10.1. Normative References 1128 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1129 Requirement Levels", BCP 14, RFC 2119, March 1997. 1131 [RFC6733] Fajardo, V., Arkko, J., Loughney, J., and G. Zorn, 1132 "Diameter Base Protocol", RFC 6733, October 2012. 1134 [RFC2914] Floyd, S., "Congestion Control Principles", BCP 41, 1135 RFC 2914, September 2000. 1137 [RFC3539] Aboba, B. and J. Wood, "Authentication, Authorization and 1138 Accounting (AAA) Transport Profile", RFC 3539, June 2003. 1140 10.2. Informative References 1142 [RFC5390] Rosenberg, J., "Requirements for Management of Overload in 1143 the Session Initiation Protocol", RFC 5390, December 2008. 1145 [RFC6357] Hilt, V., Noel, E., Shen, C., and A. Abdelal, "Design 1146 Considerations for Session Initiation Protocol (SIP) 1147 Overload Control", RFC 6357, August 2011.
1149 [TR23.843] 1150 3GPP, "Study on Core Network Overload Solutions", 1151 TR 23.843 0.6.0, October 2012. 1153 [IR.34] GSMA, "Inter-Service Provider IP Backbone Guidelines", 1154 IR 34 7.0, January 2012. 1156 [IR.88] GSMA, "LTE Roaming Guidelines", IR 88 7.0, January 2012. 1158 [IR.92] GSMA, "IMS Profile for Voice and SMS", IR 92 7.0, 1159 March 2013. 1161 [TS23.002] 1162 3GPP, "Network Architecture", TS 23.002 12.0.0, 1163 September 2012. 1165 [TS29.272] 1166 3GPP, "Evolved Packet System (EPS); Mobility Management 1167 Entity (MME) and Serving GPRS Support Node (SGSN) related 1168 interfaces based on Diameter protocol", TS 29.272 11.4.0, 1169 September 2012. 1171 [TS29.212] 1172 3GPP, "Policy and Charging Control (PCC) over Gx/Sd 1173 reference point", TS 29.212 11.6.0, September 2012. 1175 [TS29.228] 1176 3GPP, "IP Multimedia (IM) Subsystem Cx and Dx interfaces; 1177 Signalling flows and message contents", TS 29.228 11.5.0, 1178 September 2012. 1180 [TS29.002] 1181 3GPP, "Mobile Application Part (MAP) specification", 1182 TS 29.002 11.4.0, September 2012. 1184 Appendix A. Contributors 1186 Significant contributions to this document were made by Adam Roach 1187 and Eric Noel. 1189 Appendix B. Acknowledgements 1191 Review of, and contributions to, this specification by Martin Dolly, 1192 Carolyn Johnson, Jianrong Wang, Imtiaz Shaikh, Jouni Korhonen, Robert 1193 Sparks, Dieter Jacobsohn, Janet Gunn, Jean-Jacques Trottin, Laurent 1194 Thiebaut, Andrew Booth, and Lionel Morand were most appreciated. We 1195 would like to thank them for their time and expertise. 1197 Authors' Addresses 1199 Eric McMurry 1200 Tekelec 1201 17210 Campbell Rd. 1202 Suite 250 1203 Dallas, TX 75252 1204 US 1206 Email: emcmurry@computer.org 1208 Ben Campbell 1209 Tekelec 1210 17210 Campbell Rd. 1211 Suite 250 1212 Dallas, TX 75252 1213 US 1215 Email: ben@nostrum.com