BMWG                                                       L. Huang, Ed.
Internet-Draft                                                R. Gu, Ed.
Intended status: Informational                              China Mobile
Expires: September 11, 2017                                B. Mandeville
                                                                Iometrix
                                                              B. Hickman
                                                  Spirent Communications
                                                          March 10, 2017


    Benchmarking Methodology for Virtualization Network Performance
            draft-huang-bmwg-virtual-network-performance-02

Abstract

   As virtual networks have been widely established in data centers
   (IDCs), the performance of the virtual network has become a
   valuable consideration for IDC operators.  This document introduces
   a benchmarking methodology for virtualization network performance
   based on the virtual switch.

Status of This Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   This Internet-Draft will expire on September 11, 2017.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.

Table of Contents
   1.  Introduction
   2.  Terminology
   3.  Test Considerations
   4.  Key Performance Indicators
   5.  Test Setup
   6.  Benchmarking Tests
     6.1.  Throughput
       6.1.1.  Objectives
       6.1.2.  Configuration parameters
       6.1.3.  Test parameters
       6.1.4.  Test process
       6.1.5.  Test result format
     6.2.  Frame loss rate
       6.2.1.  Objectives
       6.2.2.  Configuration parameters
       6.2.3.  Test parameters
       6.2.4.  Test process
       6.2.5.  Test result format
     6.3.  CPU consumption
       6.3.1.  Objectives
       6.3.2.  Configuration parameters
       6.3.3.  Test parameters
       6.3.4.  Test process
       6.3.5.  Test result format
     6.4.  MEM consumption
       6.4.1.  Objectives
       6.4.2.  Configuration parameters
       6.4.3.  Test parameters
       6.4.4.  Test process
       6.4.5.  Test result format
     6.5.  Latency
       6.5.1.  Objectives
       6.5.2.  Configuration parameters
       6.5.3.  Test parameters
       6.5.4.  Test process
       6.5.5.  Test result format
   7.  Security Considerations
   8.  IANA Considerations
   9.  Normative References
   Authors' Addresses

1.  Introduction

   As virtual networks have been widely established in data centers
   (IDCs), the performance of the virtual network has become a
   valuable consideration for IDC operators.  This document introduces
   a benchmarking methodology for virtualization network performance
   with the virtual switch as the DUT.

2.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in [RFC2119].

3.  Test Considerations

   In a conventional test setup with non-virtual test ports, it is
   quite legitimate to assume that the test ports provide the gold
   standard for measuring performance metrics.  If test results are
   suboptimal, it is automatically assumed that the Device Under Test
   (DUT) is at fault.
   For example, when testing throughput at a given frame size, if the
   test result shows less than 100% throughput, we can safely conclude
   that it is the DUT that cannot deliver line-rate forwarding at that
   frame size.  We never suspect that the tester could be the issue.

   In a virtual test environment, where both the DUT and the test tool
   itself are software based, it is quite a different story.  Just
   like the DUT, a tester running as software has its own performance
   peak under various conditions.

   There are two types of vSwitch, distinguished by installation
   location.  One is the VM-based vSwitch, which is installed on a
   virtual machine; the other is the vSwitch installed directly on the
   host OS (alongside the hypervisor).  The latter is currently much
   more common.

   Tester calibration is essential when benchmarking in a virtual
   environment.  Furthermore, to reduce the enormous number of
   combinations of conditions, the tester MUST be calibrated with
   exactly the same combination of parameter settings the user wants
   to measure against the DUT.  A slight variation of conditions or
   parameter values will cause inaccurate measurements of the DUT.

   While it is difficult to list every combination of parameter
   settings, the following table gives the most common example of how
   to calibrate a tester before testing a DUT (VSWITCH).

   Sample calibration permutation:

   -----------------------------------------------------------------
   | Hypervisor |  VM VNIC  |   VM Memory/   | Frame |             |
   |    Type    |   Speed   | CPU Allocation | Size  | Throughput  |
   -----------------------------------------------------------------
   |    ESXi    |  1G/10G   |   512M/1Core   |   64  |             |
   |            |           |                |  128  |             |
   |            |           |                |  256  |             |
   |            |           |                |  512  |             |
   |            |           |                |  1024 |             |
   |            |           |                |  1518 |             |
   -----------------------------------------------------------------

               Figure 1: Sample Calibration Permutation

   Key points are as follows:

   a) The hypervisor type is of ultimate importance to the test
      results.  VM tester(s) MUST be installed on the same hypervisor
      type as the DUT (VSWITCH); different hypervisor types influence
      the test results.

   b) The VNIC speed has an impact on test results.  Testers MUST
      calibrate against all VNIC speeds.

   c) The CPU and memory resources allocated to VMs influence test
      results.

   d) Frame sizes affect the test results dramatically due to the
      nature of virtual machines.

   e) Possible extensions of the above table include: the number of
      VMs to be created, latency readings, one VNIC per VM vs.
      multiple VMs sharing one VNIC, and uni-directional vs.
      bi-directional traffic.

   In addition, the compute environment, including the hardware,
   SHOULD also be recorded.

   -----------------------------------------------------
   | Compute environment components  |      Model      |
   -----------------------------------------------------
   | CPU                             |                 |
   -----------------------------------------------------
   | Memory                          |                 |
   -----------------------------------------------------
   | Hard Disk                       |                 |
   -----------------------------------------------------
   | 10G Adaptors                    |                 |
   -----------------------------------------------------
   | Blade/Motherboard               |                 |
   -----------------------------------------------------

                  Figure 2: Compute Environment
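   To make the calibration permutation concrete, the following
   non-normative Python sketch enumerates the test matrix implied by
   Figure 1.  All parameter values are illustrative assumptions and
   should be replaced with the hypervisor types, VNIC speeds, resource
   allocations, and frame sizes of the environment actually under
   test.

   # Non-normative sketch: enumerate the calibration matrix of
   # Figure 1.  Every value below is an illustrative assumption;
   # substitute the combinations to be measured against the DUT.
   from itertools import product

   hypervisors    = ["ESXi"]           # same hypervisor type as the DUT
   vnic_speeds    = ["1G", "10G"]      # calibrate against all VNIC speeds
   vm_allocations = ["512M/1Core"]     # memory/CPU given to the VM tester
   frame_sizes    = [64, 128, 256, 512, 1024, 1518]   # bytes

   # One calibration run per combination; the tester's own peak
   # throughput is recorded before the DUT (VSWITCH) is tested.
   for hv, speed, alloc, size in product(hypervisors, vnic_speeds,
                                         vm_allocations, frame_sizes):
       print(f"calibrate: hypervisor={hv} vnic={speed} "
             f"alloc={alloc} frame={size}B")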
   It is important that the test environment used for tester
   calibration be as close as possible to the environment in which the
   virtual DUT (VSWITCH) will be benchmarked.  Key points that SHOULD
   be noted in the test setup are as follows:

   1. One or more VM tester(s) need to be created for both traffic
      generation and analysis.

   2. The vSwitch imposes a performance penalty due to the extra
      resources it occupies.

   3. The VNIC and its type are needed in the test setup to likewise
      account for the performance penalty when the DUT (VSWITCH) is
      created.

   In summary, calibration should be done in an environment where all
   possible factors that might negatively impact test results are
   taken into consideration.

4.  Key Performance Indicators

   A number of key performance indicators for the virtual network are
   listed below:

   a) Throughput under various frame sizes: forwarding performance
      under various frame sizes is a key performance indicator of
      interest.

   b) DUT consumption of CPU: when one or more VM(s) are added, the
      DUT (VSWITCH) consumes more CPU.  Vendors can allocate
      appropriate CPU to reach line-rate performance.

   c) DUT consumption of memory: when one or more VM(s) are added,
      the DUT (VSWITCH) consumes more memory.  Vendors can allocate
      appropriate memory to reach line-rate performance.

   d) Latency readings: some applications are highly sensitive to
      latency.  It is important to obtain latency readings under
      various conditions.

   Other indicators, such as the maximum number of VxLANs supported
   by the virtual switch, can be added when VxLAN is in use.

5.  Test Setup

   The test setup is classified into two traffic models: model A and
   model B.

   In traffic model A, a physical tester connects to the server that
   hosts the DUT (VSWITCH) and a virtual tester, to verify the
   benchmark of the server.

                        ________________________________________
                       |                                        |
   -----------------   |  ----------------    ----------------  |
   |Physical tester|---|--|DUT (VSWITCH) |----|Virtual tester|  |
   -----------------   |  ----------------    ----------------  |
                       |                 Server                 |
                       |________________________________________|

                         Figure 3: Test Model A

   In traffic model B, two virtual testers are used to verify the
   benchmark.  In this model, both testers are installed on one
   server.

    ______________________________________________________________
   |                                                              |
   |  ----------------     ----------------     ----------------  |
   |  |Virtual tester|-----|DUT (VSWITCH) |-----|Virtual tester|  |
   |  ----------------     ----------------     ----------------  |
   |                            Server                            |
   |______________________________________________________________|

                         Figure 4: Test Model B

   In our test, the test bed consists of Dell physical servers, each
   with a pair of 10GE NICs, and a physical tester.  A virtual
   tester, which occupies 2 vCPUs and 8G of memory, and the DUT
   (VSWITCH) are installed on the server.  A 10GE switch and a 1GE
   switch are used for test traffic and management, respectively.

   This test setup is also applicable to VxLAN measurement.
6.  Benchmarking Tests

6.1.  Throughput

   Unlike traditional test cases, where the DUT and the tester are
   separate, virtual network testing brings unparalleled challenges.
   In a virtual network test, the virtual tester and the DUT
   (VSWITCH) reside in one server, which means they are physically
   converged: the tester and the DUT (VSWITCH) share the CPU and
   memory resources of the same server.  Theoretically, the virtual
   tester's operation may influence the DUT (VSWITCH)'s performance.
   However, given the nature of virtualization, this method is the
   only way to test the performance of a virtual DUT.

   With existing technology, when we test a virtual switch's
   throughput, the concept used for traditional physical switches
   cannot be applied directly.  Traditional throughput indicates a
   switch's largest forwarding capability, for selected frame sizes,
   under zero-packet-loss conditions.  In virtual environments,
   however, performance variation in the virtual network is much
   greater than that of dedicated physical devices.  Since the DUT
   and the tester cannot be separated, a result only demonstrates
   that the DUT (VSWITCH) achieves such network performance under
   those particular circumstances.

   Therefore, in the virtual environment we vary the frame size and
   measure the maximum rate, which we take as the throughput
   indicator.  Throughput should be tested on both test model A and
   test model B.  The tested throughput has referential meaning when
   valuing the performance of the virtual DUT.

6.1.1.  Objectives

   The objective of this test is to determine the throughput that the
   DUT (VSWITCH) can support.

6.1.2.  Configuration parameters

   Network parameters should be defined as follows:

   a) the number of virtual testers (VMs)

   b) the number of vNICs of the virtual tester

   c) the CPU type of the server

   d) the vCPUs allocated to the virtual tester (VMs)

   e) the memory allocated to the virtual tester (VMs)

   f) the number and rate of server NICs

6.1.3.  Test parameters

   a) number of test repetitions

   b) test frame length

6.1.4.  Test process

   1. Configure the VM tester to offer traffic to the vSwitch.

   2. Increase the number of vCPUs in the tester until the traffic
      shows no packet loss.

   3. Record the maximum throughput on the vSwitch.

   4. Change the frame length and repeat steps 1 to 3.
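   The procedure above can be illustrated with a short, non-normative
   Python sketch.  The functions set_tester_vcpus() and
   offer_traffic() are hypothetical placeholders for the virtual
   tester's control API, and MAX_VCPUS is an assumed upper bound; the
   sketch shows only the shape of the measurement loop, not a
   definitive implementation.

   # Non-normative sketch of the throughput procedure in Section
   # 6.1.4.  set_tester_vcpus() and offer_traffic() stand in for the
   # virtual tester's actual control API.
   FRAME_SIZES = [64, 128, 256, 512, 1024, 1518]   # bytes
   MAX_VCPUS = 8                                   # assumed upper bound

   def measure_throughput(frame_size, set_tester_vcpus, offer_traffic):
       """Return the highest loss-free rate (Gbps) seen while growing
       the VM tester until it is no longer the bottleneck."""
       best = 0.0
       for vcpus in range(1, MAX_VCPUS + 1):
           set_tester_vcpus(vcpus)              # step 2: grow the tester
           rate_gbps, frames_lost = offer_traffic(frame_size)
           best = max(best, rate_gbps)
           if frames_lost == 0:                 # loss-free run found
               return rate_gbps                 # step 3: record it
       return best                              # tester-limited result

   # Step 4: repeat for every frame length.
   # results = {s: measure_throughput(s, set_vcpus, offer)
   #            for s in FRAME_SIZES}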
6.1.5.  Test result format

   --------------------------
   | Byte | Throughput (Gbps)|
   --------------------------
   |  64  |                  |
   --------------------------
   |  128 |                  |
   --------------------------
   |  256 |                  |
   --------------------------
   |  512 |                  |
   --------------------------
   | 1024 |                  |
   --------------------------
   | 1518 |                  |
   --------------------------

                    Figure 5: Test result format

6.2.  Frame loss rate

   Frame loss rate is also an important indicator when evaluating the
   performance of a virtual switch.  As defined in [RFC1242], the
   percentage of frames that should have been forwarded but were not
   forwarded due to lack of resources needs to be tested.  Both model
   A and model B are tested.

6.2.1.  Objectives

   The objective of this test is to determine the frame loss rate
   under different data rates and frame sizes.

6.2.2.  Configuration parameters

   Network parameters should be defined as follows:

   a) the number of virtual testers (VMs)

   b) the number of vNICs of the virtual tester

   c) the CPU type of the server

   d) the vCPUs allocated to the virtual tester (VMs)

   e) the memory allocated to the virtual tester (VMs)

   f) the number and rate of server NICs

6.2.3.  Test parameters

   a) number of test repetitions

   b) test frame length

   c) test frame rate

6.2.4.  Test process

   1. Configure the VM tester to offer traffic to the vSwitch, with
      the input frame rate decreasing from the maximum rate down to
      the rate with no frame loss, in steps of 10%, according to
      [RFC2544].

   2. Record the input frame count and the output frame count on the
      vSwitch.

   3. Calculate the frame loss percentage under the different frame
      rates.

   4. Change the frame length and repeat steps 1 to 3.
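   The loss computation and the descending rate sweep of the procedure
   above can be sketched as follows (non-normative).  run_trial() is a
   hypothetical placeholder that offers traffic at a given rate and
   returns the offered and forwarded frame counts of one trial.

   # Non-normative sketch of the frame loss procedure in Section
   # 6.2.4.  run_trial(rate) offers traffic at the given rate and
   # returns (frames_offered, frames_forwarded).
   def frame_loss_percent(offered, forwarded):
       # Percentage of frames that should have been forwarded but
       # were not (see [RFC1242]).
       return (offered - forwarded) * 100.0 / offered

   def loss_curve(max_rate_gbps, run_trial):
       results = {}
       for step in range(10):                    # 100%, 90%, ..., 10%
           rate = max_rate_gbps * (1 - 0.1 * step)
           offered, forwarded = run_trial(rate)
           loss = frame_loss_percent(offered, forwarded)
           results[rate] = loss
           if loss == 0.0:                       # first loss-free rate
               break
       return results                            # {rate_gbps: loss %}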
6.2.5.  Test result format

   ---------------------------------------------------------------------
   |Byte| Maximum rate | 90% Maximum | 80% Maximum |...|   Rate with   |
   |    |    (Gbps)    | rate (Gbps) | rate (Gbps) |   | no loss (Gbps)|
   ---------------------------------------------------------------------
   |  64|              |             |             |   |               |
   ---------------------------------------------------------------------
   | 128|              |             |             |   |               |
   ---------------------------------------------------------------------
   | 256|              |             |             |   |               |
   ---------------------------------------------------------------------
   | 512|              |             |             |   |               |
   ---------------------------------------------------------------------
   |1024|              |             |             |   |               |
   ---------------------------------------------------------------------
   |1518|              |             |             |   |               |
   ---------------------------------------------------------------------

                    Figure 6: Test result format

6.3.  CPU consumption

   The objective of this test is to determine the CPU load caused by
   the DUT (VSWITCH).  The operation of the DUT (VSWITCH) increases
   the CPU load of the host server, and different vSwitches have
   different CPU occupancy.  This can be an important indicator in
   benchmarking virtual network performance.

6.3.1.  Objectives

   The objective of this test is to verify the CPU consumption caused
   by the DUT (VSWITCH).

6.3.2.  Configuration parameters

   Network parameters should be defined as follows:

   a) the number of virtual testers (VMs)

   b) the number of vNICs of the virtual tester

   c) the CPU type of the server

   d) the vCPUs allocated to the virtual tester (VMs)

   e) the memory allocated to the virtual tester (VMs)

   f) the number and rate of server NICs

6.3.3.  Test parameters

   a) number of test repetitions

   b) test frame length

   c) traffic rate

6.3.4.  Test process

   1. Configure the VM tester to offer traffic to the vSwitch at a
      certain traffic rate.  The traffic rate could be a different
      ratio of the NIC's speed.

   2. Record the vSwitch's CPU usage on the host OS if no packet loss
      happens.

   3. Change the traffic rate and repeat steps 1 to 2.

   4. Change the frame length and repeat steps 1 to 3.

6.3.5.  Test result format

   ----------------------------------------------------
   | Byte | Traffic Rate      | CPU usage of vSwitch  |
   |      | (% of NIC speed)  |                       |
   ----------------------------------------------------
   |      | 50%               |                       |
   |      |-------------------------------------------|
   |  64  | 75%               |                       |
   |      |-------------------------------------------|
   |      | 90%               |                       |
   ----------------------------------------------------
   |      | 50%               |                       |
   |      |-------------------------------------------|
   | 128  | 75%               |                       |
   |      |-------------------------------------------|
   |      | 90%               |                       |
   ----------------------------------------------------
   ~      ~                   ~                       ~
   ----------------------------------------------------
   |      | 50%               |                       |
   |      |-------------------------------------------|
   | 1518 | 75%               |                       |
   |      |-------------------------------------------|
   |      | 90%               |                       |
   ----------------------------------------------------

                    Figure 7: Test result format

6.4.  MEM consumption

   The objective of this test is to determine the memory load caused
   by the DUT (VSWITCH).  The operation of the DUT (VSWITCH)
   increases the memory load of the host server, and different
   vSwitches have different memory occupancy.  This can be an
   important indicator in benchmarking virtual network performance.

6.4.1.  Objectives

   The objective of this test is to verify the memory consumption
   caused by the DUT (VSWITCH) on the host server.

6.4.2.  Configuration parameters

   Network parameters should be defined as follows:

   a) the number of virtual testers (VMs)

   b) the number of vNICs of the virtual tester

   c) the CPU type of the server

   d) the vCPUs allocated to the virtual tester (VMs)

   e) the memory allocated to the virtual tester (VMs)

   f) the number and rate of server NICs

6.4.3.  Test parameters

   a) number of test repetitions

   b) test frame length

6.4.4.  Test process

   1. Configure the VM tester to offer traffic to the vSwitch at a
      certain traffic rate.  The traffic rate could be a different
      ratio of the NIC's speed.

   2. Record the vSwitch's memory usage on the host OS if no packet
      loss happens.

   3. Change the traffic rate and repeat steps 1 to 2.

   4. Change the frame length and repeat steps 1 to 3.
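   For step 2 of the procedures in Sections 6.3.4 and 6.4.4, the
   vSwitch's CPU and memory usage can be sampled on the host OS.  The
   following non-normative Python sketch assumes a host-based vSwitch
   running as a single identifiable process (for example,
   "ovs-vswitchd" when Open vSwitch is the DUT) and uses the
   third-party psutil library.

   # Non-normative sketch: sample the vSwitch's CPU and memory usage
   # on the host OS while traffic is offered (step 2 of Sections
   # 6.3.4 and 6.4.4).  Assumes the vSwitch runs as a host process
   # whose name is known, and uses the third-party psutil library.
   import psutil

   def sample_vswitch_usage(process_name="ovs-vswitchd", seconds=10):
       procs = [p for p in psutil.process_iter(["name"])
                if p.info["name"] == process_name]
       if not procs:
           raise RuntimeError(f"no process named {process_name!r}")
       proc = procs[0]
       proc.cpu_percent(None)                    # prime the CPU counter
       samples = []
       for _ in range(seconds):
           cpu = proc.cpu_percent(interval=1.0)  # % of one core over 1 s
           mem = proc.memory_info().rss          # resident set, bytes
           samples.append((cpu, mem))
       avg_cpu = sum(c for c, _ in samples) / len(samples)
       avg_mem = sum(m for _, m in samples) / len(samples)
       return avg_cpu, avg_mem / 2**20           # (% CPU, MiB)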
6.4.5.  Test result format

   ----------------------------------------------------
   | Byte | Traffic Rate      | MEM usage of vSwitch  |
   |      | (% of NIC speed)  |                       |
   ----------------------------------------------------
   |      | 50%               |                       |
   |      |-------------------------------------------|
   |  64  | 75%               |                       |
   |      |-------------------------------------------|
   |      | 90%               |                       |
   ----------------------------------------------------
   |      | 50%               |                       |
   |      |-------------------------------------------|
   | 128  | 75%               |                       |
   |      |-------------------------------------------|
   |      | 90%               |                       |
   ----------------------------------------------------
   ~      ~                   ~                       ~
   ----------------------------------------------------
   |      | 50%               |                       |
   |      |-------------------------------------------|
   | 1518 | 75%               |                       |
   |      |-------------------------------------------|
   |      | 90%               |                       |
   ----------------------------------------------------

                    Figure 8: Test result format

6.5.  Latency

   A physical tester takes its time reference from its own clock or
   another time source, such as GPS, which can achieve an accuracy of
   10 ns.  In virtual network circumstances, by contrast, the virtual
   tester takes its reference time from the clock of the Linux
   system.  However, with current methods, the clocks of different
   servers or VMs cannot be synchronized accurately.  Although VMs
   running some recent versions of CentOS or Fedora can achieve an
   accuracy of 1 ms, better results can be obtained if the network
   provides better NTP connections.

   Instead of seeking better clock synchronization to improve the
   accuracy of the test, we use an echo server to forward the traffic
   back to the virtual switch.

   We use traffic model A as the latency test model, substituting an
   echo server for the virtual tester; the echo server echoes the
   traffic back.  Thus the one-way delay equals half of the
   round-trip time.

                        ________________________________________
                       |                                        |
   -----------------   |  ----------------    ----------------  |
   |Physical tester|---|--|DUT (VSWITCH) |----| Echo server  |  |
   -----------------   |  ----------------    ----------------  |
                       |                 Server                 |
                       |________________________________________|

                    Figure 9: Time delay test model

6.5.1.  Objectives

   The objective of this test is to verify the latency of a flow
   through the DUT (VSWITCH).  This can be an important indicator in
   benchmarking virtual network performance.

6.5.2.  Configuration parameters

   Network parameters should be defined as follows:

   a) the number of virtual testers (VMs)

   b) the number of vNICs of the virtual tester

   c) the CPU type of the server

   d) the vCPUs allocated to the virtual tester (VMs)

   e) the memory allocated to the virtual tester (VMs)

   f) the number and rate of server NICs

6.5.3.  Test parameters

   a) number of test repetitions

   b) test frame length

6.5.4.  Test process

   1. Configure the physical tester to offer traffic to the vSwitch
      at the throughput value measured in Section 6.1.

   2. Under the same throughput, record the times of transmitting and
      receiving the traffic by the physical tester, both with and
      without the DUT.

   3. Calculate the time difference between receiving and
      transmitting the traffic.

   4. Calculate the time delay from the difference of the values with
      and without the DUT.

   5. Change the frame length and repeat steps 1 to 4.
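   Steps 3 and 4 of the procedure above reduce to simple arithmetic,
   sketched below (non-normative): the DUT's one-way latency is half
   the difference between the round-trip times measured through the
   echo server with and without the DUT in the path.

   # Non-normative sketch of the latency computation in Section
   # 6.5.4.  Round-trip times are measured by the physical tester
   # through the echo server (Figure 9), with and without the DUT
   # (VSWITCH) in the path.
   def dut_one_way_latency(rtt_with_dut_us, rtt_without_dut_us):
       # One-way delay equals half of the round-trip time, so the
       # DUT's contribution is half the RTT difference.
       return (rtt_with_dut_us - rtt_without_dut_us) / 2.0

   # Example: an RTT of 120 us through the DUT against 80 us without
   # it gives (120 - 80) / 2 = 20 us of one-way latency added by the
   # DUT.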
6.5.5.  Test result format

   --------------------
   | Byte |  Latency  |
   --------------------
   |  64  |           |
   --------------------
   |  128 |           |
   --------------------
   |  256 |           |
   --------------------
   |  512 |           |
   --------------------
   | 1024 |           |
   --------------------
   | 1518 |           |
   --------------------

                   Figure 10: Test result format

7.  Security Considerations

   None.

8.  IANA Considerations

   None.

9.  Normative References

   [RFC1242]  Bradner, S., "Benchmarking Terminology for Network
              Interconnection Devices", RFC 1242,
              DOI 10.17487/RFC1242, July 1991,
              <https://www.rfc-editor.org/info/rfc1242>.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology
              for Network Interconnect Devices", RFC 2544,
              DOI 10.17487/RFC2544, March 1999,
              <https://www.rfc-editor.org/info/rfc2544>.

Authors' Addresses

   Lu Huang (editor)
   China Mobile
   32 Xuanwumen West Ave, Xicheng District
   Beijing  100053
   China

   Email: hlisname@yahoo.com


   Rong Gu (editor)
   China Mobile
   32 Xuanwumen West Ave, Xicheng District
   Beijing  100053
   China

   Email: gurong@chinamobile.com


   Bob Mandeville
   Iometrix
   3600 Fillmore Street Suite 409
   San Francisco, CA 94123
   USA

   Email: bob@iometrix.com


   Brooks Hickman
   Spirent Communications
   1325 Borregas Ave
   Sunnyvale, CA 94089
   USA

   Email: Brooks.Hickman@spirent.com