2 Benchmarking Methodology Working C. Davids 3 Group Illinois Institute of Technology 4 Internet-Draft V. Gurbani 5 Expires: September 13, 2012 Bell Laboratories, Alcatel-Lucent 6 S. Poretsky 7 Allot Communications 8 March 12, 2012
10 Terminology for Benchmarking Session Initiation Protocol (SIP) 11 Networking Devices 12 draft-ietf-bmwg-sip-bench-term-04
14 Abstract
16 This document provides a terminology for benchmarking SIP performance 17 in networking devices. Terms are included for test components, test 18 setup parameters, and performance benchmark metrics for black-box 19 benchmarking of SIP networking devices. The performance benchmark 20 metrics are obtained for the SIP control plane and media plane. The 21 terms are intended for use in a companion methodology document for 22 complete performance characterization of a device in a variety of 23 conditions making it possible to compare performance of different 24 devices. It is critical to provide test setup parameters and a 25 methodology document for SIP performance benchmarking because SIP 26 allows a wide range of configuration and operational conditions that 27 can influence performance benchmark measurements. It is necessary to 28 have terminology and methodology standards to ensure that reported 29 benchmarks have consistent definition and were obtained following the 30 same procedures. Benchmarks can be applied to compare performance of 31 a variety of SIP networking devices.
33 Status of this Memo
35 This Internet-Draft is submitted in full conformance with the 36 provisions of BCP 78 and BCP 79.
38 Internet-Drafts are working documents of the Internet Engineering 39 Task Force (IETF).
Note that other groups may also distribute 40 working documents as Internet-Drafts. The list of current Internet- 41 Drafts is at http://datatracker.ietf.org/drafts/current/. 43 Internet-Drafts are draft documents valid for a maximum of six months 44 and may be updated, replaced, or obsoleted by other documents at any 45 time. It is inappropriate to use Internet-Drafts as reference 46 material or to cite them other than as "work in progress." 48 This Internet-Draft will expire on September 13, 2012. 50 Copyright Notice 52 Copyright (c) 2012 IETF Trust and the persons identified as the 53 document authors. All rights reserved. 55 This document is subject to BCP 78 and the IETF Trust's Legal 56 Provisions Relating to IETF Documents 57 (http://trustee.ietf.org/license-info) in effect on the date of 58 publication of this document. Please review these documents 59 carefully, as they describe your rights and restrictions with respect 60 to this document. Code Components extracted from this document must 61 include Simplified BSD License text as described in Section 4.e of 62 the Trust Legal Provisions and are provided without warranty as 63 described in the Simplified BSD License. 65 Table of Contents 67 1. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 4 68 2. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 5 69 2.1. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . 6 70 2.2. Benchmarking Models . . . . . . . . . . . . . . . . . . . 7 71 3. Term Definitions . . . . . . . . . . . . . . . . . . . . . . . 13 72 3.1. Protocol Components . . . . . . . . . . . . . . . . . . . 13 73 3.1.1. Session . . . . . . . . . . . . . . . . . . . . . . . 13 74 3.1.2. Signaling Plane . . . . . . . . . . . . . . . . . . . 16 75 3.1.3. Media Plane . . . . . . . . . . . . . . . . . . . . . 16 76 3.1.4. Associated Media . . . . . . . . . . . . . . . . . . . 17 77 3.1.5. Overload . . . . . . . . . . . . . . . . . . . . . . . 17 78 3.1.6. Session Attempt . . . . . . . . . . . . . . . . . . . 18 79 3.1.7. Established Session . . . . . . . . . . . . . . . . . 18 80 3.1.8. Invite-initiated Session (IS) . . . . . . . . . . . . 19 81 3.1.9. Non-INVITE-initiated Session (NS) . . . . . . . . . . 20 82 3.1.10. Session Attempt Failure . . . . . . . . . . . . . . . 20 83 3.1.11. Standing Sessions Count . . . . . . . . . . . . . . . 21 84 3.2. Test Components . . . . . . . . . . . . . . . . . . . . . 21 85 3.2.1. Emulated Agent . . . . . . . . . . . . . . . . . . . . 21 86 3.2.2. Signaling Server . . . . . . . . . . . . . . . . . . . 22 87 3.2.3. SIP-Aware Stateful Firewall . . . . . . . . . . . . . 22 88 3.2.4. SIP Transport Protocol . . . . . . . . . . . . . . . . 23 89 3.3. Test Setup Parameters . . . . . . . . . . . . . . . . . . 24 90 3.3.1. Session Attempt Rate . . . . . . . . . . . . . . . . . 24 91 3.3.2. IS Media Attempt Rate . . . . . . . . . . . . . . . . 24 92 3.3.3. Establishment Threshold Time . . . . . . . . . . . . . 25 93 3.3.4. Session Duration . . . . . . . . . . . . . . . . . . . 25 94 3.3.5. Media Packet Size . . . . . . . . . . . . . . . . . . 26 95 3.3.6. Media Offered Load . . . . . . . . . . . . . . . . . . 26 96 3.3.7. Media Session Hold Time . . . . . . . . . . . . . . . 27 97 3.3.8. Loop Detection Option . . . . . . . . . . . . . . . . 27 98 3.3.9. Forking Option . . . . . . . . . . . . . . . . . . . . 28 99 3.4. Benchmarks . . . . . . . . . . . . . . . . . . . . . . . . 29 100 3.4.1. Registration Rate . . . . . . . . . . . . . . . . . . 29 101 3.4.2. 
Session Establishment Rate . . . . . . . . . . . . . . 29 102 3.4.3. Session Capacity . . . . . . . . . . . . . . . . . . . 30 103 3.4.4. Session Overload Capacity . . . . . . . . . . . . . . 31 104 3.4.5. Session Establishment Performance . . . . . . . . . . 31 105 3.4.6. Session Attempt Delay . . . . . . . . . . . . . . . . 32 106 3.4.7. IM Rate . . . . . . . . . . . . . . . . . . . . . . . 32 107 4. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 33 108 5. Security Considerations . . . . . . . . . . . . . . . . . . . 33 109 6. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 34 110 7. References . . . . . . . . . . . . . . . . . . . . . . . . . . 34 111 7.1. Normative References . . . . . . . . . . . . . . . . . . . 34 112 7.2. Informational References . . . . . . . . . . . . . . . . . 34 113 Appendix A. White Box Benchmarking Terminology . . . . . . . . . 35 114 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 35 116 1. Terminology 118 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 119 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 120 document are to be interpreted as described in BCP 14, RFC2119 121 [RFC2119]. RFC 2119 defines the use of these key words to help make 122 the intent of standards track documents as clear as possible. While 123 this document uses these keywords, this document is not a standards 124 track document. The term Throughput is defined in RFC2544 [RFC2544]. 126 For the sake of clarity and continuity, this document adopts the 127 template for definitions set out in Section 2 of RFC 1242 [RFC1242]. 128 Definitions are indexed and grouped together for ease of reference. 130 This document uses existing terminology defined in other BMWG work. 131 Examples include, but are not limited to: 133 Device under test (DUT) (c.f., Section 3.1.1 RFC 2285 [RFC2285]). 134 System under test (SUT) (c.f., Section 3.1.2, RFC 2285 [RFC2285]). 136 Many commonly used SIP terms in this document are defined in RFC 3261 137 [RFC3261]. For convenience the most important of these are 138 reproduced below. Use of these terms in this document is consistent 139 with their corresponding definition in [RFC3261]. 140 o Call Stateful: A proxy is call stateful if it retains state for a 141 dialog from the initiating INVITE to the terminating BYE request. 142 A call stateful proxy is always transaction stateful, but the 143 converse is not necessarily true. 144 o Stateful Proxy: A logical entity that maintains the client and 145 server transaction state machines defined by this specification 146 during the processing of a request, also known as a transaction 147 stateful proxy. The behavior of a stateful proxy is further 148 defined in Section 16. A (transaction) stateful proxy is not the 149 same as a call stateful proxy. 150 o Stateless Proxy: A logical entity that does not maintain the 151 client or server transaction state machines defined in this 152 specification when it processes requests. A stateless proxy 153 forwards every request it receives downstream and every response 154 it receives upstream. 155 o Back-to-back User Agent: A back-to-back user agent (B2BUA) is a 156 logical entity that receives a request and processes it as a user 157 agent server (UAS). In order to determine how the request should 158 be answered, it acts as a user agent client (UAC) and generates 159 requests. 
Unlike a proxy server, it maintains dialog state and 160 must participate in all requests sent on the dialogs it has 161 established. Since it is a concatenation of a UAC and a UAS, no 162 explicit definitions are needed for its behavior.
164 o Loop: A request that arrives at a proxy, is forwarded, and later 165 arrives back at the same proxy. When it arrives the second time, 166 its Request-URI is identical to the first time, and other header 167 fields that affect proxy operation are unchanged, so that the 168 proxy will make the same processing decision on the request it 169 made the first time. Looped requests are errors, and the 170 procedures for detecting them and handling them are described by 171 the protocol.
173 2. Introduction
175 Service Providers are now planning Voice over IP (VoIP) and 176 multimedia network deployments using the IETF-developed Session 177 Initiation Protocol (SIP) [RFC3261]. SIP is a signaling protocol 178 originally intended to be used for the dynamic establishment, 179 disconnection and modification of streams of media between end users. 180 As it has evolved, it has been adopted for use in a growing number of 181 applications and features. Many of these result in the creation of a 182 media stream, but some do not. Instead, they create other services 183 tailored to the end-users' immediate needs or preferences. The set 184 of benchmarking terms provided in this document is intended for use 185 with any SIP-enabled device performing SIP functions in the interior 186 of the network. The performance of end-user devices is outside the 187 scope of this document.
189 VoIP with SIP has led to the development of new networking devices 190 including SIP Servers, Session Border Controllers (SBCs), Back-to-back 191 user agents (B2BUAs), and SIP-Aware Stateful Firewalls. The mix of 192 voice and IP functions in these various devices has produced 193 inconsistencies in vendor-reported performance metrics and has caused 194 confusion in the service provider community. SIP allows a wide range 195 of configuration and operational conditions that can influence 196 performance benchmark measurements. When the device under test 197 terminates or relays both media and signaling, for example, it is 198 important to be able to correlate a signaling measurement with the 199 media plane measurements to determine the system performance. As 200 devices and their functions proliferate, the need to have a 201 consistent set of metrics to compare their performance becomes 202 increasingly urgent. This document and its companion methodology 203 document [I-D.ietf-bmwg-sip-bench-meth] provide a set of black-box 204 benchmarks for describing and comparing the performance of devices 205 that incorporate the SIP User Agent Client and Server functions and 206 that operate in the network's core.
208 The definition of SIP performance benchmarks necessarily includes 209 definitions of Test Setup Parameters and a test methodology. These 210 enable the Tester to perform benchmarking tests on different devices 211 and to achieve comparable and repeatable results. This document 212 provides a common set of well-defined terms for Test Components, Test 213 Setup Parameters, and Benchmarks. All the benchmarks defined are 214 black-box measurements of the SIP Control (Signaling) plane. The 215 Test Setup Parameters and Benchmarks defined in this document are 216 intended for use with the companion Methodology document.
Benchmarks 217 of internal DUT characteristics (also known as white-box benchmarks) 218 such as Session Attempt Arrival Rate, which is measured at the DUT, 219 are described in Appendix A to allow additional characterization of 220 DUT behavior with different distribution models.
222 2.1. Scope
224 The scope of this work item is summarized as follows: 225 o This terminology document describes SIP signaling (control-plane) 226 performance benchmarks for black-box measurements of SIP 227 networking devices. Stress and debug scenarios are not addressed 228 in this work item. 229 o The DUT MUST be RFC 3261 capable network equipment. This may 230 be a Registrar, Redirect Server, Stateless Proxy, or Stateful 231 Proxy. A DUT MAY also include B2BUA or SBC functionality (this is 232 referred to as the "Signaling Server"). The DUT MAY be a multi- 233 port SIP-to-switched network gateway implemented as a SIP UAC or 234 UAS. 235 o The DUT MAY have an internal SIP Application Level Gateway (ALG), 236 firewall, and/or a Network Address Translator (NAT). This is 237 referred to as the "SIP-Aware Stateful Firewall." 238 o The DUT or SUT MUST NOT be end-user equipment, such as a personal 239 digital assistant, a computer-based client, or a user terminal. 240 o The Tester acts as multiple "Emulated Agents" (EA) that initiate 241 (or respond to) SIP messages as session endpoints and source (or 242 receive) associated media for established connections. 243 o Control Signaling in the presence of Media 244 * The media performance is not benchmarked in this work item. 245 * It is RECOMMENDED that control plane benchmarks be performed 246 with media present, but this is optional. 247 * The SIP INVITE requests MUST include the SDP body. 248 * The type of DUT dictates whether the associated media streams 249 traverse the DUT or SUT. Both scenarios are within the scope 250 of this work item. 251 * SIP is frequently used to create media streams; the control 252 plane and media plane are treated as orthogonal to each other 253 in this document. While many devices support the creation of 254 media streams, benchmarks that measure the performance of these 255 streams are outside the scope of this document and its 256 companion methodology document [I-D.ietf-bmwg-sip-bench-meth]. 257 Tests may be performed with or without the creation of media 258 streams. The presence or absence of media streams MUST be 259 noted as a condition of the test as the performance of SIP 260 devices may vary accordingly. Even if media is used during 261 benchmarking, only the SIP performance will be benchmarked, not 262 the media performance or quality. 263 o Both INVITE and non-INVITE scenarios (such as Instant Messages or 264 IM) are addressed in this document. However, benchmarking SIP 265 presence is not a part of this work item. 266 o Different transport mechanisms -- such as UDP, TCP, SCTP, or TLS 267 -- may be used; however, the specific transport mechanism MUST be 268 noted as a condition of the test as the performance of SIP devices 269 may vary accordingly. 270 o Looping and forking options are also considered since they impact 271 processing at SIP proxies. 272 o REGISTER and INVITE requests may be challenged or remain 273 unchallenged for authentication purposes as this may impact the 274 performance benchmarks. Any observable performance degradation 275 due to authentication is of interest to the SIP community.
276 Whether or not the REGISTER and INVITE requests are challenged is 277 a condition of the test and will be recorded and reported. 278 o Re-INVITE requests are not considered in the scope of this work item. 279 o Only session establishment is considered for the performance 280 benchmarks. Session disconnect is not considered in the scope of 281 this work item. 282 o SIP Overload [I-D.ietf-soc-overload-design] is within the scope of 283 this work item. We test to failure and can then continue to 284 observe and record the behavior of the system after failures are 285 recorded. The cause of failure is not within the scope of this 286 work. We note the failure and may continue to test until a 287 different failure or condition is encountered. Considerations on 288 how to handle overload are deferred to work progressing in the SOC 289 working group [I-D.ietf-soc-overload-control]. Vendors are, of 290 course, free to implement their specific overload control behavior 291 as the expected test outcome if it is different from the IETF 292 recommendations. However, such behavior MUST be documented and 293 interpreted appropriately across multiple vendor implementations. 294 This will make it more meaningful to compare the performance of 295 different SIP overload implementations. 296 o IMS-specific scenarios are not considered, but test cases can be 297 applied with 3GPP-specific SIP signaling and the P-CSCF as a DUT.
299 2.2. Benchmarking Models
301 This section shows ten models to be used when benchmarking SIP 302 performance of a networking device. Figure 1 shows the 303 configuration needed to benchmark the tester itself. This model will 304 be used to establish the limitations of the test apparatus.
306 +--------+ Signaling request +--------+ 307 | +----------------------------->| | 308 | Tester | | Tester | 309 | | Signaling response | EA | 310 | |<-----------------------------+ | 311 +--------+ +--------+ 312 /|\ /|\ 313 | Media | 314 +=========================================+
316 Figure 1: Baseline performance of the Emulated Agent without a DUT 317 present
319 Figure 2 shows the DUT playing the role of a user agent client (UAC), 320 initiating requests and absorbing responses. This model can be used 321 to baseline the performance of the DUT acting as a UAC without 322 associated media.
324 +--------+ Signaling request +--------+ 325 | +----------------------------->| | 326 | DUT | | Tester | 327 | | Signaling response | EA | 328 | |<-----------------------------+ | 329 +--------+ +--------+
331 Figure 2: Baseline performance for DUT acting as a user agent client 332 without associated media
334 Figure 3 shows the DUT playing the role of a user agent server (UAS), 335 absorbing the requests and sending responses. This model can be used 336 to establish baseline performance for the DUT acting as a UAS without 337 associated media.
339 +--------+ Signaling request +--------+ 340 | +----------------------------->| | 341 | Tester | | DUT | 342 | EA | Response | | 343 | |<-----------------------------+ | 344 +--------+ +--------+
346 Figure 3: Baseline performance for DUT acting as a user agent server 347 without associated media
349 Figure 4 shows the DUT playing the role of a user agent client (UAC), 350 initiating requests and absorbing responses. This model can be used 351 to establish baseline performance for the DUT acting as a UAC with associated 352 media.
354 +--------+ Signaling request +--------+ 355 | +----------------------------->| | 356 | DUT | | Tester | 357 | | Signaling response | EA | 358 | |<-----------------------------+ | 359 | |<============ Media =========>| | 360 +--------+ +--------+
362 Figure 4: Baseline performance for DUT acting as a user agent client 363 with associated media
365 Figure 5 shows the DUT playing the role of a user agent server (UAS), 366 absorbing the requests and sending responses. This model can be used 367 to establish baseline performance for the DUT acting as a UAS with associated 368 media.
370 +--------+ Signaling request +--------+ 371 | +----------------------------->| | 372 | Tester | | DUT | 373 | EA | Response | | 374 | |<-----------------------------+ | 375 | |<============ Media =========>| | 376 +--------+ +--------+
378 Figure 5: Baseline performance for DUT acting as a user agent server 379 with associated media
381 Figure 6 shows that the Tester acts as the initiating and responding 382 EA as the DUT/SUT forwards Session Attempts.
384 +--------+ Session +--------+ Session +--------+ 385 | | Attempt | | Attempt | | 386 | |<------------+ |<------------+ | 387 | | | | | | 388 | | Response | | Response | | 389 | Tester +------------>| DUT +------------>| Tester | 390 | (EA) | | | | (EA) | 391 | | | | | | 392 +--------+ +--------+ +--------+
394 Figure 6: DUT/SUT performance benchmark for session establishment 395 without media
397 Figure 7 is used when performing those same benchmarks with 398 Associated Media traversing the DUT/SUT.
400 +--------+ Session +--------+ Session +--------+ 401 | | Attempt | | Attempt | | 402 | |<------------+ |<------------+ | 403 | | | | | | 404 | | Response | | Response | | 405 | Tester +------------>| DUT +------------>| Tester | 406 | | | | | (EA) | 407 | | Media | | Media | | 408 | |<===========>| |<===========>| | 409 +--------+ +--------+ +--------+
411 Figure 7: DUT/SUT performance benchmark for session establishment 412 with media traversing the DUT
414 Figure 8 is to be used when performing those same benchmarks with 415 Associated Media, but the media does not traverse the DUT/SUT. 416 Again, the benchmarking of the media is not within the scope of this 417 work item. The SIP control signaling is benchmarked in the presence 418 of Associated Media to determine if the SDP body of the signaling and 419 the handling of media impact the performance of the DUT/SUT.
421 +--------+ Session +--------+ Session +--------+ 422 | | Attempt | | Attempt | | 423 | |<------------+ |<------------+ | 424 | | | | | | 425 | | Response | | Response | | 426 | Tester +------------>| DUT +------------>| Tester | 427 | | | | | (EA) | 428 | | | | | | 429 +--------+ +--------+ +--------+ 430 /|\ /|\ 431 | Media | 432 +=============================================+
434 Figure 8: DUT/SUT performance benchmark for session establishment 435 with media external to the DUT
437 Figure 9 is used when performing benchmarks that require one or more 438 intermediaries to be in the signaling path. The intent is to gather 439 benchmarking statistics with a series of DUTs in place. In this 440 topology, the media is delivered end-to-end and does not traverse the 441 DUT.
443 SUT 444 '--------------------------^^^^^^^^-----------------------` 445 / \ 446 +------+ Session +---+ Session +---+ Session +------+ 447 | | Attempt | | Attempt | | Attempt | | 448 | |<---------+ |<---------+ |<---------+ | 449 | | | | | | | | 450 | | Response | | Response | | Response | | 451 |Tester+--------->|DUT+--------->|DUT|--------->|Tester| 452 | | | | | | | | 453 | | | | | | | | 454 +------+ +---+ +---+ +------+ 455 /|\ /|\ 456 | Media | 457 +=============================================+
459 Figure 9: DUT/SUT performance benchmark for session establishment 460 with multiple DUTs and end-to-end media
462 Figure 10 is used when performing benchmarks that require one or more 463 intermediaries to be in the signaling path. The intent is to gather 464 benchmarking statistics with a series of DUTs in place. In this 465 topology, the media is delivered hop-by-hop through each DUT.
467 SUT 468 '--------------------------^^^^^^^^-----------------------` 469 / \ 470 +------+ Session +---+ Session +---+ Session +------+ 471 | | Attempt | | Attempt | | Attempt | | 472 | |<---------+ |<---------+ |<---------+ | 473 | | | | | | | | 474 | | Response | | Response | | Response | | 475 |Tester+--------->|DUT+--------->|DUT|--------->|Tester| 476 | | | | | | | | 477 | | | | | | | | 478 | |<========>| |<========>| |<========>| | 479 +------+ Media +---+ Media +---+ Media +------+
481 Figure 10: DUT/SUT performance benchmark for session establishment 482 with multiple DUTs and hop-by-hop media
484 Figure 11 illustrates the SIP signaling for an Established Session. 485 The Tester acts as the EAs and initiates a Session Attempt with the 486 DUT/SUT. When the Emulated Agent (EA) receives a 200 OK from the 487 DUT/SUT, that session is considered to be an Established Session. The 488 illustration indicates three states of the session being created by 489 the EA - Attempting, Established, and Disconnecting. Sessions can be 490 one of two types: Invite-initiated Session (IS) or Non-INVITE- 491 initiated Session (NS). Failure of the DUT/SUT to respond 492 successfully within the Establishment Threshold Time is considered a 493 Session Attempt Failure. SIP INVITE messages MUST include the SDP 494 body to specify the Associated Media. Use of Associated Media, to be 495 sourced from the EA, is optional. When Associated Media is used, it 496 may traverse the DUT/SUT depending upon the type of DUT/SUT. The 497 Associated Media is shown in Figure 11 as "Media" connected to media 498 ports M1 and M2 on the EA. After the EA sends a BYE, the session 499 disconnects. Performance test cases for session disconnects are not 500 considered in this work item (the BYE request is shown for 501 completeness).
502 EA DUT/SUT M1 M2 503 | | | | 504 | INVITE | | | 505 --------+--------------->| | | 506 | | | | 507 Attempting | | | 508 | 200 OK | | | 509 --------|<-------------- | | | 510 | | | | 511 | | | | 512 | | | Media | 513 Established | |<=====>| 514 | | | | 515 | BYE | | | 516 --------+--------------> | | | 517 | | | | 518 Disconnecting | | | 519 | 200 OK | | | 520 --------|<-------------- | | | 521 | | | |
523 Figure 11: Basic SIP test topology
525 3. Term Definitions
527 3.1. Protocol Components
529 3.1.1. Session
531 Definition: 532 The combination of signaling and media messages and processes that 533 enable two or more participants to communicate.
535 Discussion: 536 SIP messages in the signaling plane can be used to create and 537 manage applications for one or more end users.
SIP is often used 538 to create and manage media streams in support of applications. A 539 session always has a signaling component and may have a media 540 component. Therefore, a Session may be defined as signaling only 541 or a combination of signaling and media (c.f. Associated Media, 542 see Section 3.1.4). SIP includes definitions of a Call-ID, a 543 dialog, and a transaction that support this application. A 544 growing number of usages and applications do not require the 545 creation of associated media. The first such usage was the 546 REGISTER. Applications that use the MESSAGE and SUBSCRIBE/NOTIFY 547 methods also do not require SIP to manage media streams. The 548 terms Invite-initiated Session (IS) and Non-INVITE-initiated 549 Session (NS) are used to distinguish between these different 550 usages.
552 A Session, in the context of this document, is considered to be a 553 vector with three components:
555 1. A component in the signaling plane (SIP messages), sess.sig; 556 2. A media component in the media plane (RTP and SRTP streams for 557 example), sess.med (which may be null); 558 3. A control component in the media plane (RTCP messages for 559 example), sess.medc (which may be null).
561 An IS is expected to have non-null sess.sig and sess.med 562 components. The use of control protocols in the media component 563 is media dependent; thus the expected presence or absence of 564 sess.medc is media dependent and test-case dependent. An NS is 565 expected to have a non-null sess.sig component, but null sess.med 566 and sess.medc components.
568 Packets in the Signaling Plane and Media Plane will be handled by 569 different processes within the DUT. They will take different 570 paths within a SUT. These different processes and paths may 571 produce variations in performance. The terminology and benchmarks 572 defined in this document and the methodology for their use are 573 designed to enable us to compare performance of the DUT/SUT with 574 reference to the type of SIP-supported application it is handling.
576 Note that one or more sessions can simultaneously exist between 577 any participants. This can be the case, for example, when the EA 578 sets up both an IM and a voice call through the DUT/SUT. These 579 sessions are represented as an array session[x].
581 Sessions will be represented as a vector array with three 582 components, as follows: 583 session-> 584 session[x].sig, the signaling component 585 session[x].medc, the media control component (e.g., RTCP) 586 session[x].med[y], an array of associated media streams (e.g., 587 RTP, SRTP, RTSP, MSRP). This media component may consist of zero 588 or more media streams. 589 Figure 12 models the vectors of the session.
591 Measurement Units: 593 N/A.
595 Issues: 596 None.
598 See Also: 599 Media Plane 600 Signaling Plane 601 Associated Media 602 Invite-initiated Session (IS) 603 Non-INVITE-initiated Session (NS)
605 |\ 606 | 607 | \ 608 sess.sig| 609 | \ 610 | 611 | \ 612 | o 613 | / 614 | / | 615 | / 616 | / | 617 | / 618 | / | 619 | / 620 | / | sess.medc 621 |/_____________________ 622 / / 623 / | 624 / / 625 sess.med / | 626 /_ _ _ _ _ _ _ _/ 627 / 628 / 629 / 630 /
632 Figure 12: Application or session components
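The following Python sketch is purely illustrative and is not part of the terminology; it shows one way a test tool could model the session vector described above. The class and field names are hypothetical and simply mirror session[x].sig, session[x].medc, and session[x].med[y].

   import dataclasses
   from typing import List, Optional

   @dataclasses.dataclass
   class Session:
       # One element of the session[] array from Section 3.1.1.
       sig: str                    # signaling component (e.g., dialog identifier)
       medc: Optional[str] = None  # media control component (e.g., RTCP); may be null
       med: List[str] = dataclasses.field(default_factory=list)
                                   # associated media streams (e.g., RTP); may be empty

   # An IS carries signaling and (usually) media; an NS is signaling only.
   invite_session = Session(sig="call-id-1;to-tag;from-tag",
                            medc="RTCP", med=["RTP audio"])
   non_invite_session = Session(sig="call-id-2;to-tag;from-tag")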
634 3.1.2. Signaling Plane
636 Definition: 637 The control plane in which SIP messages [RFC3261] are exchanged 638 between SIP Agents [RFC3261] to establish a connection for media 639 exchange.
641 Discussion: 642 SIP messages are used to establish sessions in several ways: 643 directly between two User Agents [RFC3261], through a Proxy Server 644 [RFC3261], or through a series of Proxy Servers. The Signaling 645 Plane MUST include the Session Description Protocol (SDP). 646 The Signaling Plane for a single Session is represented by 647 session.sig.
649 Measurement Units: 650 N/A.
652 Issues: 653 None.
655 See Also: 656 Media Plane 657 EAs
659 3.1.3. Media Plane
661 Definition: 662 The data plane in which one or more media streams and their 663 associated media control protocols are exchanged after a media 664 connection has been created by the exchange of signaling messages 665 in the Signaling Plane.
667 Discussion: 668 Media may also be known as the "bearer channel". The Media Plane 669 MUST include the media control protocol, if one is used, and the 670 media stream(s). Examples of media are audio, video, whiteboard, 671 and instant messaging service. The media stream is described in 672 the SDP of the Signaling Plane. 673 The media for a single Session is represented by session.med. The 674 media control protocol is represented by session.medc.
676 Measurement Units: 677 N/A.
679 Issues: 680 None.
682 See Also: 683 Signaling Plane
685 3.1.4. Associated Media
687 Definition: 688 Media that corresponds to an 'm' line in the SDP payload of the 689 Signaling Plane.
691 Discussion: 692 Any media protocol MAY be used. 693 For any session's signaling component, represented as session.sig, 694 there may be one or multiple associated media streams, represented 695 by the vector array session.med[y], which is referred to 696 as the Associated Media.
698 Measurement Units: 699 N/A.
701 Issues: 702 None.
704 3.1.5. Overload
706 Definition: 707 Overload is defined as the state where a SIP server does not have 708 sufficient resources to process all incoming SIP messages 709 [I-D.ietf-soc-overload-design]. The distinction between an 710 overload condition and other failure scenarios is outside the 711 scope of this document, which is concerned with black-box testing.
713 Discussion: 714 Under overload conditions, all or a percentage of Session Attempts 715 will fail due to lack of resources. SIP server resources may 716 include CPU processing capacity, network bandwidth, input/output 717 queues, or disk resources. Any combination of resources may be 718 fully utilized when a SIP server (the DUT/SUT) is in the overload 719 condition. For proxy-only devices, overload issues will 720 be dominated by the number of signaling messages they can handle 721 in a unit time before their throughput starts to drop.
723 For UA-type network devices (e.g., gateways), overload must 724 necessarily include both the signaling traffic and media streams. 725 It is expected that the amount of signaling that a UA can handle 726 is inversely proportional to the number of media streams currently 727 handled by that UA.
729 Measurement Units: 730 N/A.
732 Issues: 733 The issue of overload in SIP networks is currently a topic of 734 discussion in the SOC WG. The normal response to an overload 735 stimulus -- sending a 503 response -- is considered inadequate and 736 new response codes and behaviors may be specified in the future. 737 From the perspective of this document, all these responses will be 738 considered to be failures. There is thus no dependency between 739 this document and the ongoing work on the treatment of overload 740 failure.
742 3.1.6.
Session Attempt
744 Definition: 745 A SIP Session for which the EA has sent the SIP INVITE or 746 SUBSCRIBE/NOTIFY and has not yet received a message response from 747 the DUT/SUT.
749 Discussion: 750 The attempted session may be an IS or an NS. The Session Attempt 751 includes SIP INVITEs and SUBSCRIBE/NOTIFY messages. It also 752 includes all INVITEs that are rejected for lack of authentication 753 information.
755 Measurement Units: 756 N/A.
758 Issues: 759 None.
761 See Also: 762 Session 763 Session Attempt Rate 764 Invite-initiated Session 765 Non-INVITE-initiated Session
767 3.1.7. Established Session
768 Definition: 769 A SIP session for which the EA acting as the UE/UA has received a 770 200 OK message from the DUT/SUT.
772 Discussion: 773 An Established Session MAY be of type Invite-initiated Session (IS) 774 or Non-INVITE-initiated Session (NS).
776 Measurement Units: 777 N/A.
779 Issues: 780 None.
782 See Also: 783 Invite-initiated Session 784 Session Attempting State 785 Session Disconnecting State
787 3.1.8. Invite-initiated Session (IS)
789 Definition: 790 A Session that is created by an exchange of messages in the 791 Signaling Plane, the first of which is a SIP INVITE request.
793 Discussion: 794 An IS is identified by the Call-ID, To-tag, and From-tag of the 795 SIP message that establishes the session. These three fields are 796 used to identify a SIP Dialog (RFC3261 [RFC3261]). An IS may have 797 an Associated Media description in the SDP body. An IS may have 798 multiple Associated Media streams. The inclusion of media is test 799 case dependent. An IS is successfully established if the 800 following two conditions are met: 801 1. sess.sig is established by the end of the Establishment Threshold 802 Time (c.f. Section 3.3.3), and 803 2. If a media session is described in the SDP body of the 804 signaling message, then the media session is established by 805 the end of the Establishment Threshold Time (c.f. Section 3.3.3).
807 Measurement Units: 808 N/A.
810 Issues: 811 None.
813 See Also: 814 Session 815 Non-INVITE-initiated Session 816 Associated Media
818 3.1.9. Non-INVITE-initiated Session (NS)
820 Definition: 821 A session that is created by an exchange of messages in the 822 Signaling Plane that does not include an initial SIP INVITE 823 message.
825 Discussion: 826 An NS is successfully established if the Session Attempt via a 827 non-INVITE request results in the EA receiving a 2xx reply from 828 the DUT/SUT before the expiration of the Establishment Threshold 829 timer (c.f., Section 3.3.3). An example of an NS is a session 830 created by the SUBSCRIBE request.
832 Measurement Units: 833 N/A.
835 Issues: 836 None.
838 See Also: 839 Session 840 Invite-initiated Session
842 3.1.10. Session Attempt Failure
844 Definition: 845 A session attempt that does not result in an Established Session.
847 Discussion: 848 The session attempt failure may be indicated by the following 849 observations at the EA: 850 1. Receipt of a SIP 4xx, 5xx, or 6xx class response to a Session 851 Attempt. 852 2. The lack of any received SIP response to a Session Attempt 853 within the Establishment Threshold Time (c.f. Section 3.3.3).
855 Measurement Units: 856 N/A.
858 Issues: 859 None.
861 See Also: 862 Session Attempt
864 3.1.11. Standing Sessions Count
866 Definition: 867 The number of Sessions currently established on the DUT/SUT at any 868 instant.
870 Discussion: 871 The number of Standing Sessions is influenced by the Session 872 Duration and the Session Attempt Rate. Benchmarks MUST be 873 reported with the maximum and average Standing Sessions for the 874 DUT/SUT for the duration of the test. In order to determine the 875 maximum and average Standing Sessions on the DUT/SUT for the 876 duration of the test, it is necessary to make periodic measurements 877 of the number of Standing Sessions on the DUT/SUT. The 878 recommended value for the measurement period is 1 second. Since 879 we cannot directly poll the DUT/SUT, we take the number of 880 standing sessions on the DUT/SUT to be the number of distinct 881 calls as measured by the number of distinct Call-IDs that the EA 882 is processing at the time of measurement.
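As a non-normative illustration only, the following Python sketch shows how an EA could derive the maximum and average Standing Sessions from such 1-second samples; the callable active_call_ids() is hypothetical and stands for whatever interface the EA offers for listing the Call-IDs it is currently processing.

   import time

   def standing_sessions_stats(active_call_ids, duration_s, period_s=1.0):
       # Sample the number of distinct Call-IDs once per period and report
       # the maximum and average Standing Sessions over the test duration.
       samples = []
       end = time.monotonic() + duration_s
       while time.monotonic() < end:
           samples.append(len(active_call_ids()))  # distinct Call-IDs == Standing Sessions
           time.sleep(period_s)
       if not samples:
           return 0, 0.0
       return max(samples), sum(samples) / len(samples)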
884 Measurement Units: 885 Number of sessions
887 Issues: 888 None.
890 See Also: 891 Session Duration 892 Session Attempt Rate
895 3.2. Test Components
897 3.2.1. Emulated Agent
898 Definition: 899 A device in the test topology that initiates/responds to SIP messages 900 as one or more session endpoints and, wherever applicable, 901 sources/receives Associated Media for Established Sessions.
903 Discussion: 904 The EA functions in the signaling and media planes. The Tester 905 may act as multiple EAs.
907 Measurement Units: 908 N/A
910 Issues: 911 None.
913 See Also: 914 Media Plane 915 Signaling Plane 916 Established Session 917 Associated Media
919 3.2.2. Signaling Server
921 Definition: 922 Device in the test topology that acts to create sessions between EAs 923 in the media plane. This device is either a DUT or a component of a 924 SUT.
926 Discussion: 927 The DUT MUST be RFC 3261 capable network equipment such as a 928 Registrar, Redirect Server, User Agent Server, Stateless Proxy, or 929 Stateful Proxy. A DUT MAY also include B2BUA or SBC functionality.
931 Measurement Units: 932 N/A
934 Issues: 935 None.
937 See Also: 938 Signaling Plane
940 3.2.3. SIP-Aware Stateful Firewall
941 Definition: 942 Device in the test topology that provides Denial-of-Service (DoS) 943 protection to the Signaling and Media Planes for the EAs and the 944 Signaling Server.
946 Discussion: 947 The SIP-Aware Stateful Firewall MAY be an internal component or 948 function of the Signaling Server. The SIP-Aware Stateful Firewall 949 MAY be a standalone device. If it is a standalone device, it MUST 950 be paired with a Signaling Server. If it is a standalone device, 951 it MUST be benchmarked as part of a SUT. SIP-Aware Stateful 952 Firewalls MAY include Network Address Translation (NAT) 953 functionality. Ideally, the inclusion of the SIP-Aware Stateful 954 Firewall in a SUT causes no degradation of the measured performance 955 benchmarks.
957 Measurement Units: 958 N/A
960 Issues: 961 None.
963 See Also:
965 3.2.4. SIP Transport Protocol
967 Definition: 968 The protocol used for transport of the Signaling Plane messages.
970 Discussion: 971 Performance benchmarks may vary for the same SIP networking device 972 depending upon whether TCP, UDP, TLS, SCTP, or another transport 973 layer protocol is used. For this reason, it MAY be necessary to 974 measure the SIP Performance Benchmarks using these various 975 transport protocols. Performance Benchmarks MUST report the SIP 976 Transport Protocol used to obtain the benchmark results.
978 Measurement Units: 979 TCP, UDP, SCTP, TLS over TCP, TLS over UDP, or TLS over SCTP
981 Issues: 982 None.
984 See Also:
986 3.3. Test Setup Parameters
988 3.3.1. Session Attempt Rate
990 Definition: 991 Configuration of the EA for the number of sessions that the EA 992 attempts to establish with the DUT/SUT over a specified time 993 interval.
995 Discussion: 996 The Session Attempt Rate can cause variation in performance 997 benchmark measurements. Since this is the number of sessions 998 configured on the Tester, some sessions may not be successfully 999 established on the DUT. A session may be either an IS or an NS.
1001 Measurement Units: 1002 Session attempts per second
1004 Issues: 1005 None.
1007 See Also: 1008 Session 1009 Session Attempt
1011 3.3.2. IS Media Attempt Rate
1013 Definition: 1014 Configuration on the EA for the number of ISs with Associated Media to 1015 be established at the DUT per continuous one-second time 1016 interval.
1018 Discussion: 1019 Note that a Media Session MUST be associated with an IS. In this 1020 document we assume that there is a one-to-one correspondence 1021 between IS session attempts and Media Session attempts. By 1022 including this definition we leave open the possibility that there 1023 may be an IS that does not include a media description. Also note 1024 that the IS Media Attempt Rate defines the number of media 1025 sessions we are trying to create, not the number of media sessions 1026 that are actually created. Variations in the Media Session 1027 Attempt Rate might cause variations in performance benchmark 1028 measurements. Some attempts might not result in successful 1029 sessions established on the DUT.
1031 Measurement Units: 1032 session attempts per second (saps)
1034 Issues: 1035 None.
1037 See Also: 1038 IS
1040 3.3.3. Establishment Threshold Time
1042 Definition: 1043 Configuration of the EA for representing the amount of time that 1044 an EA will wait before declaring a Session Attempt Failure.
1046 Discussion: 1047 This time duration is test dependent. 1048 It is RECOMMENDED that the Establishment Threshold Time value be 1049 set to Timer B (for ISs) or Timer F (for NSs) as specified in RFC 1050 3261, Table 4 [RFC3261]. Using the default value of T1 1051 (500 ms) specified in the table and a constant multiplier of 64 1052 gives a value of 32 seconds for this timer (i.e., 500 ms * 64 = 1053 32 s).
1055 Measurement Units: 1056 seconds
1058 Issues: 1059 None.
1061 See Also: 1062 Session Attempt Failure
1064 3.3.4. Session Duration
1066 Definition: 1067 Configuration of the EA that represents the amount of time that 1068 the SIP dialog is intended to exist between the two EAs associated 1069 with the test.
1071 Discussion: 1072 The time at which the BYE is sent will control the Session 1073 Duration. 1074 Normally, the Session Duration will be the same as the Media 1075 Session Hold Time. However, it is possible that the dialog 1076 established between the two EAs can support different media 1077 sessions at different points in time. Providing both parameters 1078 allows the testing agency to explore this possibility.
1080 Measurement Units: 1081 seconds
1083 Issues: 1084 None.
1086 See Also: 1087 Media Session Hold Time
1089 3.3.5. Media Packet Size
1091 Definition: 1092 Configuration on the EA for a fixed size of packets used for media 1093 streams.
1095 Discussion: 1096 For a single benchmark test, all sessions use the same size packet 1097 for media streams. The size of packets can cause variation in 1098 performance benchmark measurements.
1100 Measurement Units: 1101 bytes
1103 Issues: 1104 None.
1106 See Also:
1108 3.3.6. Media Offered Load
1110 Definition: 1111 Configuration of the EA for the constant rate of Associated Media 1112 traffic offered by the EA to the DUT/SUT for one or more 1113 Established Sessions of type IS.
1115 Discussion: 1116 The Media Offered Load to be used for a test MUST be reported with 1117 three components: 1118 1. per Associated Media stream; 1119 2. per IS; 1120 3. aggregate. 1121 For a single benchmark test, all sessions use the same Media 1122 Offered Load per Media Stream. There may be multiple Associated 1123 Media streams per IS. The aggregate is the sum of all Associated 1124 Media for all ISs.
1126 Measurement Units: 1127 packets per second (pps)
1129 Issues: 1130 None.
1132 See Also: 1133 Established Session 1134 Invite-initiated Session 1135 Associated Media
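Purely as a non-normative illustration of how the three reported components relate, a test script could compute them as in the following Python sketch (all names and values are hypothetical):

   def media_offered_load(pps_per_stream, streams_per_is, established_is_count):
       # Return the per-stream, per-IS, and aggregate Media Offered Load in pps,
       # assuming every IS carries the same number of identical streams.
       per_stream = pps_per_stream
       per_is = pps_per_stream * streams_per_is
       aggregate = per_is * established_is_count
       return per_stream, per_is, aggregate

   # Example: 50 pps per audio stream, 1 stream per IS, 1000 established ISs
   print(media_offered_load(50, 1, 1000))  # -> (50, 50, 50000)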
1137 3.3.7. Media Session Hold Time
1139 Definition: 1140 Parameter configured at the EA that represents the amount of time 1141 that the Associated Media for an Established Session of type IS 1142 will last.
1144 Discussion: 1145 The Associated Media streams may be bi-directional or uni- 1146 directional as indicated in the test methodology. 1147 Normally, the Media Session Hold Time will be the same as the 1148 Session Duration. However, it is possible that the dialog 1149 established between the two EAs can support different media 1150 sessions at different points in time. Providing both parameters 1151 allows the testing agency to explore this possibility.
1153 Measurement Units: 1154 seconds
1156 Issues: 1157 None.
1159 See Also: 1160 Associated Media 1161 Established Session 1162 Invite-initiated Session (IS)
1164 3.3.8. Loop Detection Option
1165 Definition: 1166 An option that causes a Proxy to check for loops in the routing of 1167 a SIP request before forwarding the request.
1169 Discussion: 1170 This is an optional process that a SIP proxy may employ; the 1171 process is described under Proxy Behavior in Section 1172 16.3 (Request Validation) of RFC 3261, which also contains suggestions 1173 as to how the option could be implemented. Any procedure to 1174 detect loops will use processor cycles and hence could impact the 1175 performance of a proxy.
1177 Measurement Units: 1178 N/A
1180 Issues: 1181 None.
1183 See Also:
1185 3.3.9. Forking Option
1187 Definition: 1188 An option that enables a Proxy to fork requests to more than one 1189 destination.
1191 Discussion: 1192 This is a process that a SIP proxy may employ to find the UAS. 1193 The option is described under Proxy Behavior in 1194 Section 16.1 of RFC 3261. A proxy that uses forking must maintain state 1195 information, and this will use processor cycles and memory. Thus, 1196 the use of this option could impact the performance of a proxy, and 1197 different implementations could produce different impacts. 1198 SIP supports serial or parallel forking. When performing a test, 1199 the type of forking mode MUST be indicated.
1201 Measurement Units: 1202 The number of endpoints that will receive the forked invitation. 1203 A value of 1 indicates that the request is destined to only one 1204 endpoint, a value of 2 indicates that the request is forked to two 1205 endpoints, and so on. This is an integer value ranging between 1 1206 and N inclusive, where N is the maximum number of endpoints to 1207 which the invitation is sent. 1208 Type of forking used, namely parallel or serial.
1210 Issues: 1211 None.
1213 See Also:
1215 3.4. Benchmarks
1217 3.4.1. Registration Rate
1219 Definition: 1220 The maximum number of registrations that can be successfully 1221 completed by the DUT/SUT in a given time period without 1222 registration failures in that time period.
1224 Discussion: 1225 This benchmark is obtained with zero failures: 100% of the 1226 registrations attempted by the EA are successfully completed by 1227 the DUT/SUT. The registration rate provisioned on the Emulated 1228 Agent is raised and lowered as described in the algorithm in the 1229 companion methodology draft [I-D.ietf-bmwg-sip-bench-meth] until a 1230 traffic load consisting of registrations at the given attempt rate 1231 over the sustained period of time identified by T in the algorithm 1232 completes without failure.
1234 Measurement Units: 1235 registrations per second (rps)
1237 Issues: 1238 None.
1240 See Also:
1242 3.4.2. Session Establishment Rate
1244 Definition: 1245 The maximum number of sessions that can be successfully completed 1246 by the DUT/SUT in a given time period without session 1247 establishment failures in that time period.
1249 Discussion: 1250 This benchmark is obtained with zero failures: 100% of the 1251 sessions attempted by the Emulated Agent are successfully 1252 completed by the DUT/SUT. The session attempt rate provisioned on 1253 the EA is raised and lowered as described in the algorithm in the 1254 accompanying methodology document, until a traffic load at the 1255 given attempt rate over the sustained period of time identified by 1256 T in the algorithm completes without any failed session attempts.
1258 Sessions may be IS, NS, or a mix of both, as defined in 1259 the particular test.
1261 Measurement Units: 1262 sessions per second (sps)
1264 Issues: 1265 None.
1267 See Also: 1268 Invite-initiated Session 1269 Non-INVITE-initiated Session 1270 Session Attempt Rate
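The exact search procedure is defined in the companion methodology document [I-D.ietf-bmwg-sip-bench-meth]; the following Python sketch is only a simplified, non-normative illustration of the raise/lower idea, using a binary search between hypothetical bounds. The function run_trial() is hypothetical and stands for offering traffic at a given rate for the sustained period T and reporting whether every attempt succeeded.

   def session_establishment_rate(run_trial, low=1, high=10000):
       # Search for the highest attempt rate (sessions/sec) at which a trial
       # of duration T completes without any failed session attempts.
       best = 0
       while low <= high:
           rate = (low + high) // 2
           if run_trial(rate):           # no failures at this rate
               best, low = rate, rate + 1
           else:                         # failures observed; lower the rate
               high = rate - 1
       return best

The same search idea applies to the Registration Rate benchmark, with registrations offered instead of session attempts.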
1272 3.4.3. Session Capacity
1274 Definition: 1275 The maximum value of Standing Sessions Count achieved by the DUT/ 1276 SUT during a time period T in which the EA is sending session 1277 establishment messages at the Session Establishment Rate.
1279 Discussion: 1280 Sessions may be IS or NS. If they are IS, they can be with or 1281 without media. When benchmarking Session Capacity for sessions 1282 with media, it is required that these sessions be permanently 1283 established (i.e., they remain active for the duration of the 1284 test). This can be achieved by causing the EA not to send a BYE 1285 for the duration of the testing. In the signaling plane, this 1286 requirement means that the dialog lasts as long as the test lasts. 1287 When media is present, the Media Session Hold Time MUST be set to 1288 infinity so that sessions remain established for the duration of 1289 the test. If the DUT/SUT is dialog-stateful, then we expect its 1290 performance will be impacted by setting Media Session Hold Time to 1291 infinity, since the DUT/SUT will need to allocate resources to 1292 process and store the state information. The report of the 1293 Session Capacity must include the Session Establishment Rate at 1294 which it was measured.
1296 Measurement Units: 1297 sessions
1299 Issues: 1300 None.
1302 See Also: 1303 Established Session 1304 Session Attempt Rate 1305 Session Attempt Failure
1307 3.4.4. Session Overload Capacity
1309 Definition: 1310 The maximum number of Established Sessions that can exist 1311 simultaneously on the DUT/SUT until it stops responding to Session 1312 Attempts.
1314 Discussion: 1315 Session Overload Capacity is measured after the Session Capacity 1316 is measured. The Session Overload Capacity is greater than or 1317 equal to the Session Capacity. When benchmarking Session Overload 1318 Capacity, continue to offer Session Attempts to the DUT/SUT after 1319 the first Session Attempt Failure occurs and measure Established 1320 Sessions until there is no SIP message response for the 1321 duration of the Establishment Threshold Time. It is worth noting that 1322 the Session Establishment Performance is expected to decrease 1323 after the first Session Attempt Failure occurs.
1325 Measurement Units: 1326 Sessions
1328 Issues: 1329 None.
1331 See Also: 1332 Overload 1333 Session Capacity 1334 Session Attempt Failure
1336 3.4.5. Session Establishment Performance
1338 Definition: 1339 The percent of Session Attempts that become Established Sessions 1340 over the duration of a benchmarking test.
1342 Discussion: 1343 Session Establishment Performance is a benchmark to indicate 1344 session establishment success for the duration of a test. The 1345 duration for measuring this benchmark is to be specified in the 1346 Methodology. The Session Duration SHOULD be configured to 1347 infinity so that sessions remain established for the entire test 1348 duration.
1350 Session Establishment Performance is calculated as shown in the 1351 following equation:
1353 Session Establishment   Total Established Sessions
1354 Performance           = --------------------------
1355                           Total Session Attempts
1357 Session Establishment Performance may be monitored in real time 1358 during a benchmarking test. However, the reported benchmark MUST 1359 be based on the total measurements for the test duration.
1361 Measurement Units: 1362 Percent (%)
1364 Issues: 1365 None.
1367 See Also: 1368 Established Session 1369 Session Attempt
1371 3.4.6. Session Attempt Delay
1373 Definition: 1374 The average time measured at the EA for a Session Attempt to 1375 result in an Established Session.
1377 Discussion: 1378 Time is measured from when the EA sends the first INVITE for the 1379 Call-ID in the case of an IS. Time is measured from when the EA 1380 sends the first non-INVITE message in the case of an NS. Session 1381 Attempt Delay MUST be measured for every established session to 1382 calculate the average. Session Attempt Delay MUST be measured at 1383 the Maximum Session Establishment Rate.
1385 Measurement Units: 1386 Seconds
1388 Issues: 1389 None.
1391 See Also: 1392 Maximum Session Establishment Rate
1394 3.4.7. IM Rate
1395 Definition: 1396 Maximum number of IM messages that can be successfully completed by the DUT/SUT in a given time period.
1398 Discussion: 1399 For a UAS, the definition of success is the receipt of an IM 1400 request and the subsequent sending of a final response. 1401 For a UAC, the definition of success is the sending of an IM 1402 request and the receipt of a final response to it. For a proxy, 1403 the definition of success is as follows: 1404 A. the number of IM requests it receives from the upstream client 1405 MUST be equal to the number of IM requests it sent to the 1406 downstream server; and 1407 B. the number of IM responses it receives from the downstream 1408 server MUST be equal to the number of IM requests sent to the 1409 downstream server; and 1410 C. the number of IM responses it sends to the upstream client 1411 MUST be equal to the number of IM requests it received from 1412 the upstream client.
1414 Measurement Units: 1415 IM messages per second
1417 Issues: 1418 None.
1420 See Also:
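As a non-normative illustration, the three proxy success conditions above can be checked from message counters kept at the test tool, as in the following Python sketch (the counter names are hypothetical):

   def proxy_im_success(req_from_upstream, req_to_downstream,
                        resp_from_downstream, resp_to_upstream):
       # Conditions A, B, and C from Section 3.4.7 for a proxy DUT.
       cond_a = req_from_upstream == req_to_downstream
       cond_b = resp_from_downstream == req_to_downstream
       cond_c = resp_to_upstream == req_from_upstream
       return cond_a and cond_b and cond_c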
1422 4. IANA Considerations
1424 This document requires no IANA considerations.
1426 5. Security Considerations
1428 Documents of this type do not directly affect the security of 1429 Internet or corporate networks as long as benchmarking is not 1430 performed on devices or systems connected to production networks. 1431 Security threats and how to counter them in SIP and the media layer 1432 are discussed in RFC 3261 [RFC3261], RFC 3550 [RFC3550], RFC 3711 1433 [RFC3711], and various other documents. This document attempts to 1434 formalize a set of common terminology for benchmarking SIP networks. 1435 Packets with unintended and/or unauthorized DSCP or IP precedence 1436 values may present security issues. Determining the security 1437 consequences of such packets is out of scope for this document.
1439 6. Acknowledgments
1441 The authors would like to thank Keith Drage, Cullen Jennings, Daryl 1442 Malas, Al Morton, and Henning Schulzrinne for invaluable 1443 contributions to this document.
1445 7. References
1447 7.1. Normative References
1449 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1450 Requirement Levels", BCP 14, RFC 2119, March 1997.
1452 [RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for 1453 Network Interconnect Devices", RFC 2544, March 1999.
1455 [RFC3261] Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, 1456 A., Peterson, J., Sparks, R., Handley, M., and E. 1457 Schooler, "SIP: Session Initiation Protocol", RFC 3261, 1458 June 2002.
1460 [I-D.ietf-bmwg-sip-bench-meth] 1461 Davids, C., Gurbani, V., and S. Poretsky, "Methodology for 1462 Benchmarking SIP Networking Devices", 1463 draft-ietf-bmwg-sip-bench-meth-03 (work in progress), 1464 March 2011.
1466 7.2. Informative References
1468 [RFC2285] Mandeville, R., "Benchmarking Terminology for LAN 1469 Switching Devices", RFC 2285, February 1998.
1471 [RFC1242] Bradner, S., "Benchmarking terminology for network 1472 interconnection devices", RFC 1242, July 1991.
1474 [RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and V. 1475 Jacobson, "RTP: A Transport Protocol for Real-Time 1476 Applications", STD 64, RFC 3550, July 2003.
1478 [RFC3711] Baugher, M., McGrew, D., Naslund, M., Carrara, E., and K. 1479 Norrman, "The Secure Real-time Transport Protocol (SRTP)", 1480 RFC 3711, March 2004.
1482 [I-D.ietf-soc-overload-design] 1483 Hilt, V., Noel, E., Shen, C., and A. Abdelal, "Design 1484 Considerations for Session Initiation Protocol (SIP) 1485 Overload Control", draft-ietf-soc-overload-design-08 (work 1486 in progress), July 2011.
1488 [I-D.ietf-soc-overload-control] 1489 Gurbani, V., Hilt, V., and H. Schulzrinne, "Session 1490 Initiation Protocol (SIP) Overload Control", 1491 draft-ietf-soc-overload-control-07 (work in progress), 1492 January 2012.
1494 Appendix A. White Box Benchmarking Terminology
1496 Session Attempt Arrival Rate
1498 Definition: 1499 The number of Session Attempts received at the DUT/SUT over a 1500 specified time period.
1502 Discussion: 1503 Session Attempts are indicated by the arrival of SIP INVITE or 1504 SUBSCRIBE/NOTIFY messages. The Session Attempt Arrival Rate 1505 distribution can be any model selected by the user of this 1506 document. It is important, when comparing benchmarks of different 1507 devices, that the same distribution model be used. Common 1508 distributions are expected to be Uniform and Poisson.
1510 Measurement Units: 1511 Session attempts/sec
1513 Issues: 1514 None.
1516 See Also: 1517 Session Attempt 1519 Authors' Addresses 1521 Carol Davids 1522 Illinois Institute of Technology 1523 201 East Loop Road 1524 Wheaton, IL 60187 1525 USA 1527 Phone: +1 630 682 6024 1528 Email: davids@iit.edu 1529 Vijay K. Gurbani 1530 Bell Laboratories, Alcatel-Lucent 1531 1960 Lucent Lane 1532 Rm 9C-533 1533 Naperville, IL 60566 1534 USA 1536 Phone: +1 630 224 0216 1537 Email: vkg@bell-labs.com 1539 Scott Poretsky 1540 Allot Communications 1541 300 TradeCenter, Suite 4680 1542 Woburn, MA 08101 1543 USA 1545 Phone: +1 508 309 2179 1546 Email: sporetsky@allot.com